00:00:00.001 Started by upstream project "autotest-per-patch" build number 132716
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.039 The recommended git tool is: git
00:00:00.039 using credential 00000000-0000-0000-0000-000000000002
00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.067 Fetching changes from the remote Git repository
00:00:00.070 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.097 Using shallow fetch with depth 1
00:00:00.097 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.097 > git --version # timeout=10
00:00:00.136 > git --version # 'git version 2.39.2'
00:00:00.136 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.221 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.221 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.909 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.923 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.935 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.935 > git config core.sparsecheckout # timeout=10
00:00:03.948 > git read-tree -mu HEAD # timeout=10
00:00:03.963 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.989 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.989 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.222 [Pipeline] Start of Pipeline
00:00:04.236 [Pipeline] library
00:00:04.237 Loading library shm_lib@master
00:00:04.237 Library shm_lib@master is cached. Copying from home.
00:00:04.256 [Pipeline] node
00:00:04.267 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_3
00:00:04.269 [Pipeline] {
00:00:04.280 [Pipeline] catchError
00:00:04.282 [Pipeline] {
00:00:04.295 [Pipeline] wrap
00:00:04.302 [Pipeline] {
00:00:04.311 [Pipeline] stage
00:00:04.313 [Pipeline] { (Prologue)
00:00:04.332 [Pipeline] echo
00:00:04.334 Node: VM-host-WFP7
00:00:04.341 [Pipeline] cleanWs
00:00:04.352 [WS-CLEANUP] Deleting project workspace...
00:00:04.352 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.359 [WS-CLEANUP] done
00:00:04.577 [Pipeline] setCustomBuildProperty
00:00:04.684 [Pipeline] httpRequest
00:00:05.056 [Pipeline] echo
00:00:05.058 Sorcerer 10.211.164.101 is alive
00:00:05.068 [Pipeline] retry
00:00:05.070 [Pipeline] {
00:00:05.084 [Pipeline] httpRequest
00:00:05.089 HttpMethod: GET
00:00:05.089 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.090 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.090 Response Code: HTTP/1.1 200 OK
00:00:05.091 Success: Status code 200 is in the accepted range: 200,404
00:00:05.091 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.099 [Pipeline] }
00:00:06.115 [Pipeline] // retry
00:00:06.121 [Pipeline] sh
00:00:06.413 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.429 [Pipeline] httpRequest
00:00:06.798 [Pipeline] echo
00:00:06.800 Sorcerer 10.211.164.101 is alive
00:00:06.810 [Pipeline] retry
00:00:06.813 [Pipeline] {
00:00:06.827 [Pipeline] httpRequest
00:00:06.831 HttpMethod: GET
00:00:06.832 URL: http://10.211.164.101/packages/spdk_eec61894813d3232c06044a3a6cd4dc2076c84bc.tar.gz
00:00:06.832 Sending request to url: http://10.211.164.101/packages/spdk_eec61894813d3232c06044a3a6cd4dc2076c84bc.tar.gz
00:00:06.833 Response Code: HTTP/1.1 200 OK
00:00:06.833 Success: Status code 200 is in the accepted range: 200,404
00:00:06.834 Saving response body to /var/jenkins/workspace/raid-vg-autotest_3/spdk_eec61894813d3232c06044a3a6cd4dc2076c84bc.tar.gz
00:00:27.356 [Pipeline] }
00:00:27.375 [Pipeline] // retry
00:00:27.384 [Pipeline] sh
00:00:27.669 + tar --no-same-owner -xf spdk_eec61894813d3232c06044a3a6cd4dc2076c84bc.tar.gz
00:00:30.217 [Pipeline] sh
00:00:30.498 + git -C spdk log --oneline -n5
00:00:30.498 eec618948 lib/reduce: Unmap backing dev blocks
00:00:30.498 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:00:30.498 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:00:30.498 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:00:30.498 e2dfdf06c accel/mlx5: Register post_poller handler
00:00:30.515 [Pipeline] writeFile
00:00:30.529 [Pipeline] sh
00:00:30.810 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:30.820 [Pipeline] sh
00:00:31.100 + cat autorun-spdk.conf
00:00:31.100 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:31.100 SPDK_RUN_ASAN=1
00:00:31.100 SPDK_RUN_UBSAN=1
00:00:31.100 SPDK_TEST_RAID=1
00:00:31.100 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:31.108 RUN_NIGHTLY=0
00:00:31.109 [Pipeline] }
00:00:31.122 [Pipeline] // stage
00:00:31.136 [Pipeline] stage
00:00:31.138 [Pipeline] { (Run VM)
00:00:31.181 [Pipeline] sh
00:00:31.463 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:31.463 + echo 'Start stage prepare_nvme.sh'
00:00:31.463 Start stage prepare_nvme.sh
00:00:31.463 + [[ -n 5 ]]
00:00:31.463 + disk_prefix=ex5
00:00:31.463 + [[ -n /var/jenkins/workspace/raid-vg-autotest_3 ]]
00:00:31.463 + [[ -e /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf ]]
00:00:31.463 + source /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf
00:00:31.464 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:31.464 ++ SPDK_RUN_ASAN=1
00:00:31.464 ++ SPDK_RUN_UBSAN=1
00:00:31.464 ++ SPDK_TEST_RAID=1
00:00:31.464 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:31.464 ++ RUN_NIGHTLY=0
00:00:31.464 + cd /var/jenkins/workspace/raid-vg-autotest_3
00:00:31.464 + nvme_files=()
00:00:31.464 + declare -A nvme_files
00:00:31.464 + backend_dir=/var/lib/libvirt/images/backends
00:00:31.464 + nvme_files['nvme.img']=5G
00:00:31.464 + nvme_files['nvme-cmb.img']=5G
00:00:31.464 + nvme_files['nvme-multi0.img']=4G
00:00:31.464 + nvme_files['nvme-multi1.img']=4G
00:00:31.464 + nvme_files['nvme-multi2.img']=4G
00:00:31.464 + nvme_files['nvme-openstack.img']=8G
00:00:31.464 + nvme_files['nvme-zns.img']=5G
00:00:31.464 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:31.464 + (( SPDK_TEST_FTL == 1 ))
00:00:31.464 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:31.464 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:31.464 + for nvme in "${!nvme_files[@]}"
00:00:31.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:00:31.464 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:31.464 + for nvme in "${!nvme_files[@]}"
00:00:31.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:00:31.464 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:31.464 + for nvme in "${!nvme_files[@]}"
00:00:31.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:00:31.464 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:31.464 + for nvme in "${!nvme_files[@]}"
00:00:31.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:00:31.464 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:31.464 + for nvme in "${!nvme_files[@]}"
00:00:31.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:00:31.464 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:31.464 + for nvme in "${!nvme_files[@]}"
00:00:31.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:00:31.464 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:31.464 + for nvme in "${!nvme_files[@]}"
00:00:31.464 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:00:31.723 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:31.723 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:00:31.723 + echo 'End stage prepare_nvme.sh'
00:00:31.723 End stage prepare_nvme.sh
00:00:31.733 [Pipeline] sh
00:00:32.016 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:32.016 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39
00:00:32.016 
00:00:32.016 DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant
00:00:32.016 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_3/spdk
00:00:32.016 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_3
00:00:32.016 HELP=0
00:00:32.016 DRY_RUN=0
00:00:32.016 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:00:32.016 NVME_DISKS_TYPE=nvme,nvme,
00:00:32.016 NVME_AUTO_CREATE=0
00:00:32.016 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:00:32.016 NVME_CMB=,,
00:00:32.016 NVME_PMR=,,
00:00:32.016 NVME_ZNS=,,
00:00:32.016 NVME_MS=,,
00:00:32.016 NVME_FDP=,,
00:00:32.016 SPDK_VAGRANT_DISTRO=fedora39
00:00:32.016 SPDK_VAGRANT_VMCPU=10
00:00:32.016 SPDK_VAGRANT_VMRAM=12288
00:00:32.016 SPDK_VAGRANT_PROVIDER=libvirt
00:00:32.016 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:32.016 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:32.016 SPDK_OPENSTACK_NETWORK=0
00:00:32.016 VAGRANT_PACKAGE_BOX=0
00:00:32.016 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile
00:00:32.016 FORCE_DISTRO=true
00:00:32.016 VAGRANT_BOX_VERSION=
00:00:32.016 EXTRA_VAGRANTFILES=
00:00:32.016 NIC_MODEL=virtio
00:00:32.016 
00:00:32.016 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt'
00:00:32.016 /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_3
00:00:34.038 Bringing machine 'default' up with 'libvirt' provider...
00:00:34.606 ==> default: Creating image (snapshot of base box volume).
00:00:34.606 ==> default: Creating domain with the following settings...
00:00:34.606 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733477879_8be96b8bda8550fb76d0
00:00:34.606 ==> default: -- Domain type: kvm
00:00:34.606 ==> default: -- Cpus: 10
00:00:34.606 ==> default: -- Feature: acpi
00:00:34.606 ==> default: -- Feature: apic
00:00:34.606 ==> default: -- Feature: pae
00:00:34.606 ==> default: -- Memory: 12288M
00:00:34.606 ==> default: -- Memory Backing: hugepages:
00:00:34.606 ==> default: -- Management MAC:
00:00:34.606 ==> default: -- Loader:
00:00:34.606 ==> default: -- Nvram:
00:00:34.606 ==> default: -- Base box: spdk/fedora39
00:00:34.606 ==> default: -- Storage pool: default
00:00:34.606 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733477879_8be96b8bda8550fb76d0.img (20G)
00:00:34.606 ==> default: -- Volume Cache: default
00:00:34.606 ==> default: -- Kernel:
00:00:34.606 ==> default: -- Initrd:
00:00:34.606 ==> default: -- Graphics Type: vnc
00:00:34.606 ==> default: -- Graphics Port: -1
00:00:34.606 ==> default: -- Graphics IP: 127.0.0.1
00:00:34.606 ==> default: -- Graphics Password: Not defined
00:00:34.606 ==> default: -- Video Type: cirrus
00:00:34.606 ==> default: -- Video VRAM: 9216
00:00:34.606 ==> default: -- Sound Type:
00:00:34.606 ==> default: -- Keymap: en-us
00:00:34.606 ==> default: -- TPM Path:
00:00:34.606 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:34.606 ==> default: -- Command line args:
00:00:34.606 ==> default: -> value=-device,
00:00:34.606 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:34.606 ==> default: -> value=-drive,
00:00:34.606 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:00:34.606 ==> default: -> value=-device,
00:00:34.606 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.606 ==> default: -> value=-device,
00:00:34.606 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:34.606 ==> default: -> value=-drive,
00:00:34.606 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:34.606 ==> default: -> value=-device,
00:00:34.606 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.606 ==> default: -> value=-drive,
00:00:34.606 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:34.606 ==> default: -> value=-device,
00:00:34.606 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.606 ==> default: -> value=-drive,
00:00:34.606 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:34.606 ==> default: -> value=-device,
00:00:34.606 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.864 ==> default: Creating shared folders metadata...
00:00:34.865 ==> default: Starting domain.
00:00:36.242 ==> default: Waiting for domain to get an IP address...
00:00:54.339 ==> default: Waiting for SSH to become available...
00:00:54.340 ==> default: Configuring and enabling network interfaces...
00:00:59.614 default: SSH address: 192.168.121.117:22
00:00:59.614 default: SSH username: vagrant
00:00:59.614 default: SSH auth method: private key
00:01:02.150 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:10.285 ==> default: Mounting SSHFS shared folder...
00:01:11.660 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:11.660 ==> default: Checking Mount..
00:01:13.564 ==> default: Folder Successfully Mounted!
00:01:13.564 ==> default: Running provisioner: file...
00:01:14.497 default: ~/.gitconfig => .gitconfig
00:01:15.064 
00:01:15.064 SUCCESS!
00:01:15.064 
00:01:15.064 cd to /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use.
00:01:15.064 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:15.064 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm.
00:01:15.064 
00:01:15.073 [Pipeline] }
00:01:15.086 [Pipeline] // stage
00:01:15.095 [Pipeline] dir
00:01:15.096 Running in /var/jenkins/workspace/raid-vg-autotest_3/fedora39-libvirt
00:01:15.098 [Pipeline] {
00:01:15.111 [Pipeline] catchError
00:01:15.113 [Pipeline] {
00:01:15.127 [Pipeline] sh
00:01:15.410 + vagrant ssh-config --host vagrant
00:01:15.410 + sed -ne /^Host/,$p
00:01:15.410 + tee ssh_conf
00:01:17.937 Host vagrant
00:01:17.937 HostName 192.168.121.117
00:01:17.937 User vagrant
00:01:17.937 Port 22
00:01:17.937 UserKnownHostsFile /dev/null
00:01:17.937 StrictHostKeyChecking no
00:01:17.937 PasswordAuthentication no
00:01:17.937 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:17.937 IdentitiesOnly yes
00:01:17.937 LogLevel FATAL
00:01:17.937 ForwardAgent yes
00:01:17.937 ForwardX11 yes
00:01:17.937 
00:01:17.953 [Pipeline] withEnv
00:01:17.956 [Pipeline] {
00:01:17.972 [Pipeline] sh
00:01:18.252 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:18.252 source /etc/os-release
00:01:18.252 [[ -e /image.version ]] && img=$(< /image.version)
00:01:18.252 # Minimal, systemd-like check.
00:01:18.252 if [[ -e /.dockerenv ]]; then
00:01:18.252 # Clear garbage from the node's name:
00:01:18.252 # agt-er_autotest_547-896 -> autotest_547-896
00:01:18.252 # $HOSTNAME is the actual container id
00:01:18.252 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:18.252 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:18.252 # We can assume this is a mount from a host where container is running,
00:01:18.252 # so fetch its hostname to easily identify the target swarm worker.
00:01:18.252 container="$(< /etc/hostname) ($agent)"
00:01:18.252 else
00:01:18.252 # Fallback
00:01:18.252 container=$agent
00:01:18.252 fi
00:01:18.253 fi
00:01:18.253 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:18.253 
00:01:18.523 [Pipeline] }
00:01:18.539 [Pipeline] // withEnv
00:01:18.548 [Pipeline] setCustomBuildProperty
00:01:18.565 [Pipeline] stage
00:01:18.568 [Pipeline] { (Tests)
00:01:18.590 [Pipeline] sh
00:01:18.870 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:19.144 [Pipeline] sh
00:01:19.425 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:19.702 [Pipeline] timeout
00:01:19.703 Timeout set to expire in 1 hr 30 min
00:01:19.706 [Pipeline] {
00:01:19.723 [Pipeline] sh
00:01:20.015 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:20.605 HEAD is now at eec618948 lib/reduce: Unmap backing dev blocks
00:01:20.616 [Pipeline] sh
00:01:20.896 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:21.168 [Pipeline] sh
00:01:21.479 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:21.756 [Pipeline] sh
00:01:22.036 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:22.295 ++ readlink -f spdk_repo
00:01:22.295 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:22.295 + [[ -n /home/vagrant/spdk_repo ]]
00:01:22.295 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:22.295 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:22.295 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:22.295 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:22.295 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:22.295 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:22.295 + cd /home/vagrant/spdk_repo
00:01:22.295 + source /etc/os-release
00:01:22.295 ++ NAME='Fedora Linux'
00:01:22.295 ++ VERSION='39 (Cloud Edition)'
00:01:22.295 ++ ID=fedora
00:01:22.295 ++ VERSION_ID=39
00:01:22.295 ++ VERSION_CODENAME=
00:01:22.295 ++ PLATFORM_ID=platform:f39
00:01:22.295 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:22.295 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:22.295 ++ LOGO=fedora-logo-icon
00:01:22.295 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:22.295 ++ HOME_URL=https://fedoraproject.org/
00:01:22.295 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:22.295 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:22.295 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:22.295 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:22.295 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:22.295 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:22.295 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:22.295 ++ SUPPORT_END=2024-11-12
00:01:22.295 ++ VARIANT='Cloud Edition'
00:01:22.295 ++ VARIANT_ID=cloud
00:01:22.295 + uname -a
00:01:22.295 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:22.295 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:22.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:22.862 Hugepages
00:01:22.862 node hugesize free / total
00:01:22.862 node0 1048576kB 0 / 0
00:01:22.862 node0 2048kB 0 / 0
00:01:22.862 
00:01:22.862 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:22.862 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:22.862 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:22.862 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:22.862 + rm -f /tmp/spdk-ld-path
00:01:22.862 + source autorun-spdk.conf
00:01:22.862 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:22.862 ++ SPDK_RUN_ASAN=1
00:01:22.862 ++ SPDK_RUN_UBSAN=1
00:01:22.862 ++ SPDK_TEST_RAID=1
00:01:22.862 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:22.862 ++ RUN_NIGHTLY=0
00:01:22.862 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:22.862 + [[ -n '' ]]
00:01:22.862 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:22.862 + for M in /var/spdk/build-*-manifest.txt
00:01:22.862 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:22.862 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:22.862 + for M in /var/spdk/build-*-manifest.txt
00:01:22.862 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:22.862 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:23.121 + for M in /var/spdk/build-*-manifest.txt
00:01:23.121 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:23.121 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:23.121 ++ uname
00:01:23.121 + [[ Linux == \L\i\n\u\x ]]
00:01:23.121 + sudo dmesg -T
00:01:23.121 + sudo dmesg --clear
00:01:23.121 + dmesg_pid=5430
00:01:23.121 + [[ Fedora Linux == FreeBSD ]]
00:01:23.121 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.121 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.121 + sudo dmesg -Tw
00:01:23.121 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:23.121 + [[ -x /usr/src/fio-static/fio ]]
00:01:23.121 + export FIO_BIN=/usr/src/fio-static/fio
00:01:23.121 + FIO_BIN=/usr/src/fio-static/fio
00:01:23.121 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:23.121 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:23.121 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:23.121 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:23.121 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:23.121 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:23.121 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:23.121 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:23.121 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:23.121 09:38:48 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:23.121 09:38:48 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:23.121 09:38:48 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.121 09:38:48 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:23.121 09:38:48 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:23.121 09:38:48 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:23.121 09:38:48 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:23.121 09:38:48 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:01:23.121 09:38:48 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:23.121 09:38:48 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:23.380 09:38:48 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:23.380 09:38:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:23.380 09:38:48 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:23.380 09:38:48 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:23.380 09:38:48 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:23.380 09:38:48 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:23.380 09:38:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.380 09:38:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.380 09:38:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.380 09:38:48 -- paths/export.sh@5 -- $ export PATH
00:01:23.380 09:38:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.380 09:38:48 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:23.380 09:38:48 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:23.380 09:38:48 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733477928.XXXXXX
00:01:23.380 09:38:48 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733477928.YJsWaO
00:01:23.380 09:38:48 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:23.380 09:38:48 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:23.380 09:38:48 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:23.380 09:38:48 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:23.380 09:38:48 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:23.380 09:38:48 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:23.380 09:38:48 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:23.380 09:38:48 -- common/autotest_common.sh@10 -- $ set +x
00:01:23.380 09:38:48 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:23.380 09:38:48 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:23.380 09:38:48 -- pm/common@17 -- $ local monitor
00:01:23.380 09:38:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.380 09:38:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.380 09:38:48 -- pm/common@25 -- $ sleep 1
00:01:23.380 09:38:48 -- pm/common@21 -- $ date +%s
00:01:23.380 09:38:48 -- pm/common@21 -- $ date +%s
00:01:23.380 09:38:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733477928
00:01:23.380 09:38:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733477928
00:01:23.380 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733477928_collect-cpu-load.pm.log
00:01:23.380 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733477928_collect-vmstat.pm.log
00:01:24.314 09:38:49 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:24.314 09:38:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:24.314 09:38:49 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:24.314 09:38:49 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:24.314 09:38:49 -- spdk/autobuild.sh@16 -- $ date -u
00:01:24.314 Fri Dec 6 09:38:49 AM UTC 2024
00:01:24.314 09:38:49 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:24.314 v25.01-pre-304-geec618948
00:01:24.314 09:38:49 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:24.314 09:38:49 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:24.314 09:38:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:24.314 09:38:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:24.314 09:38:49 -- common/autotest_common.sh@10 -- $ set +x
00:01:24.314 ************************************
00:01:24.314 START TEST asan
00:01:24.314 ************************************
00:01:24.314 using asan
00:01:24.314 09:38:49 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:24.314 
00:01:24.314 real 0m0.001s
00:01:24.314 user 0m0.000s
00:01:24.314 sys 0m0.000s
00:01:24.314 09:38:49 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:24.314 09:38:49 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:24.314 ************************************
00:01:24.314 END TEST asan
00:01:24.314 ************************************
00:01:24.314 09:38:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:24.314 09:38:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:24.314 09:38:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:24.314 09:38:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:24.314 09:38:49 -- common/autotest_common.sh@10 -- $ set +x
00:01:24.314 ************************************
00:01:24.314 START TEST ubsan
00:01:24.314 ************************************
00:01:24.314 using ubsan
00:01:24.314 09:38:49 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:24.314 
00:01:24.314 real 0m0.001s
00:01:24.314 user 0m0.000s
00:01:24.314 sys 0m0.000s
00:01:24.314 09:38:49 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:24.314 09:38:49 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:24.314 ************************************
00:01:24.314 END TEST ubsan
00:01:24.314 ************************************
00:01:24.571 09:38:49 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:24.571 09:38:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:24.571 09:38:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:24.571 09:38:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:24.571 09:38:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:24.571 09:38:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:24.571 09:38:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:24.571 09:38:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:24.571 09:38:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:24.571 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:24.571 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:25.136 Using 'verbs' RDMA provider
00:01:40.948 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:55.833 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:56.093 Creating mk/config.mk...done.
00:01:56.093 Creating mk/cc.flags.mk...done.
00:01:56.093 Type 'make' to build.
00:01:56.093 09:39:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:56.093 09:39:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:56.093 09:39:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:56.093 09:39:21 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.093 ************************************
00:01:56.093 START TEST make
00:01:56.093 ************************************
00:01:56.093 09:39:21 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:56.662 make[1]: Nothing to be done for 'all'.
00:02:08.867 The Meson build system
00:02:08.867 Version: 1.5.0
00:02:08.867 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:08.867 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:08.867 Build type: native build
00:02:08.867 Program cat found: YES (/usr/bin/cat)
00:02:08.867 Project name: DPDK
00:02:08.867 Project version: 24.03.0
00:02:08.867 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:08.867 C linker for the host machine: cc ld.bfd 2.40-14
00:02:08.867 Host machine cpu family: x86_64
00:02:08.867 Host machine cpu: x86_64
00:02:08.867 Message: ## Building in Developer Mode ##
00:02:08.867 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:08.867 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:08.867 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:08.867 Program python3 found: YES (/usr/bin/python3)
00:02:08.867 Program cat found: YES (/usr/bin/cat)
00:02:08.867 Compiler for C supports arguments -march=native: YES
00:02:08.867 Checking for size of "void *" : 8
00:02:08.867 Checking for size of "void *" : 8 (cached)
00:02:08.867 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:08.867 Library m found: YES
00:02:08.867 Library numa found: YES
00:02:08.867 Has header "numaif.h" : YES
00:02:08.867 Library fdt found: NO
00:02:08.867 Library execinfo found: NO
00:02:08.868 Has header "execinfo.h" : YES
00:02:08.868 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:08.868 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:08.868 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:08.868 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:08.868 Run-time dependency openssl found: YES 3.1.1
00:02:08.868 Run-time dependency libpcap found: YES 1.10.4
00:02:08.868 Has header "pcap.h" with dependency libpcap: YES
00:02:08.868 Compiler for C supports arguments -Wcast-qual: YES
00:02:08.868 Compiler for C supports arguments -Wdeprecated: YES
00:02:08.868 Compiler for C supports arguments -Wformat: YES
00:02:08.868 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:08.868 Compiler for C supports arguments -Wformat-security: NO
00:02:08.868 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:08.868 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:08.868 Compiler for C supports arguments -Wnested-externs: YES
00:02:08.868 Compiler for C supports arguments -Wold-style-definition: YES
00:02:08.868 Compiler for C supports arguments -Wpointer-arith: YES
00:02:08.868 Compiler for C supports arguments -Wsign-compare: YES
00:02:08.868 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:08.868 Compiler for C supports arguments -Wundef: YES
00:02:08.868 Compiler for C supports arguments -Wwrite-strings: YES
00:02:08.868 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:08.868 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:08.868 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:08.868 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:08.868 Program objdump found: YES (/usr/bin/objdump)
00:02:08.868 Compiler for C supports arguments -mavx512f: YES
00:02:08.868 Checking if "AVX512 checking" compiles: YES
00:02:08.868 Fetching value of define "__SSE4_2__" : 1
00:02:08.868 Fetching value of define "__AES__" : 1
00:02:08.868 Fetching value of define "__AVX__" : 1
00:02:08.868 Fetching value of define "__AVX2__" : 1
00:02:08.868 Fetching value of define "__AVX512BW__" : 1
00:02:08.868 Fetching value of define "__AVX512CD__" : 1
00:02:08.868 Fetching value of define "__AVX512DQ__" : 1
00:02:08.868 Fetching value of define "__AVX512F__" : 1
00:02:08.868 Fetching value of define "__AVX512VL__" : 1
00:02:08.868 Fetching value of define "__PCLMUL__" : 1
00:02:08.868 Fetching value of define "__RDRND__" : 1
00:02:08.868 Fetching value of define "__RDSEED__" : 1
00:02:08.868 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:08.868 Fetching value of define "__znver1__" : (undefined)
00:02:08.868 Fetching value of define "__znver2__" : (undefined)
00:02:08.868 Fetching value of define "__znver3__" : (undefined)
00:02:08.868 Fetching value of define "__znver4__" : (undefined)
00:02:08.868 Library asan found: YES
00:02:08.868 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:08.868 Message: lib/log: Defining dependency "log"
00:02:08.868 Message: lib/kvargs: Defining dependency "kvargs"
00:02:08.868 Message: lib/telemetry: Defining dependency "telemetry"
00:02:08.868 Library rt found: YES
00:02:08.868 Checking for function "getentropy" : NO
00:02:08.868 Message: lib/eal: Defining dependency "eal"
00:02:08.868 Message: lib/ring: Defining dependency "ring"
00:02:08.868 Message: lib/rcu: Defining dependency "rcu"
00:02:08.868 Message: lib/mempool: Defining dependency "mempool"
00:02:08.868 Message: lib/mbuf: Defining dependency "mbuf"
00:02:08.868 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:08.868 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:08.868 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:08.868 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:08.868 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:08.868 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:08.868 Compiler for C supports arguments -mpclmul: YES
00:02:08.868 Compiler for C supports arguments -maes: YES
00:02:08.868 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:08.868 Compiler for C supports arguments -mavx512bw: YES
00:02:08.868 Compiler for C supports arguments -mavx512dq: YES
00:02:08.868 Compiler for C supports arguments -mavx512vl: YES
00:02:08.868 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:08.868 Compiler for C supports arguments -mavx2: YES
00:02:08.868 Compiler for C supports arguments -mavx: YES
00:02:08.868 Message: lib/net: Defining dependency "net"
00:02:08.868 Message: lib/meter: Defining dependency "meter"
00:02:08.868 Message: lib/ethdev: Defining dependency "ethdev"
00:02:08.868 Message: lib/pci: Defining dependency "pci"
00:02:08.868 Message: lib/cmdline: Defining dependency "cmdline"
00:02:08.868 Message: lib/hash: Defining dependency "hash"
00:02:08.868 Message: lib/timer: Defining dependency "timer"
00:02:08.868 Message: lib/compressdev: Defining dependency "compressdev"
00:02:08.868 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:08.868 Message: lib/dmadev: Defining dependency "dmadev"
00:02:08.868 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:08.868 Message: lib/power: Defining dependency "power"
00:02:08.868 Message: lib/reorder: Defining dependency "reorder"
00:02:08.868 Message: lib/security: Defining dependency "security"
00:02:08.868 Has header "linux/userfaultfd.h" : YES
00:02:08.868 Has header "linux/vduse.h" : YES
00:02:08.868 Message: lib/vhost: Defining dependency "vhost"
00:02:08.868 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:08.868 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:08.868 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:08.868 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:08.868 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:08.868 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:08.868 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:08.868 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:08.868 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:08.868 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:08.868 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:08.868 Configuring doxy-api-html.conf using configuration
00:02:08.868 Configuring doxy-api-man.conf using configuration
00:02:08.868 Program mandb found: YES (/usr/bin/mandb)
00:02:08.868 Program sphinx-build found: NO
00:02:08.868 Configuring rte_build_config.h using configuration
00:02:08.868 Message:
00:02:08.868 =================
00:02:08.868 Applications Enabled
00:02:08.868 =================
00:02:08.868
00:02:08.868 apps:
00:02:08.868
00:02:08.868
00:02:08.868 Message:
00:02:08.868 =================
00:02:08.868 Libraries Enabled
00:02:08.868 =================
00:02:08.868
00:02:08.868 libs:
00:02:08.868 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:08.868 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:08.868 cryptodev, dmadev, power, reorder, security, vhost,
00:02:08.868
00:02:08.868 Message:
00:02:08.868 ===============
00:02:08.868 Drivers Enabled
00:02:08.869 ===============
00:02:08.869
00:02:08.869 common:
00:02:08.869
00:02:08.869 bus:
00:02:08.869 pci, vdev,
00:02:08.869 mempool:
00:02:08.869 ring,
00:02:08.869 dma:
00:02:08.869
00:02:08.869 net:
00:02:08.869
00:02:08.869 crypto:
00:02:08.869
00:02:08.869 compress:
00:02:08.869
00:02:08.869 vdpa:
00:02:08.869
00:02:08.869
00:02:08.869 Message:
00:02:08.869 =================
00:02:08.869 Content Skipped
00:02:08.869 =================
00:02:08.869
00:02:08.869 apps:
00:02:08.869 dumpcap: explicitly disabled via build config
00:02:08.869 graph: explicitly disabled via build config
00:02:08.869 pdump: explicitly disabled via build config
00:02:08.869 proc-info: explicitly disabled via build config
00:02:08.869 test-acl: explicitly disabled via build config
00:02:08.869 test-bbdev: explicitly disabled via build config
00:02:08.869 test-cmdline: explicitly disabled via build config
00:02:08.869 test-compress-perf: explicitly disabled via build config
00:02:08.869 test-crypto-perf: explicitly disabled via build config
00:02:08.869 test-dma-perf: explicitly disabled via build config
00:02:08.869 test-eventdev: explicitly disabled via build config
00:02:08.869 test-fib: explicitly disabled via build config
00:02:08.869 test-flow-perf: explicitly disabled via build config
00:02:08.869 test-gpudev: explicitly disabled via build config
00:02:08.869 test-mldev: explicitly disabled via build config
00:02:08.869 test-pipeline: explicitly disabled via build config
00:02:08.869 test-pmd: explicitly disabled via build config
00:02:08.869 test-regex: explicitly disabled via build config
00:02:08.869 test-sad: explicitly disabled via build config
00:02:08.869 test-security-perf: explicitly disabled via build config
00:02:08.869
00:02:08.869 libs:
00:02:08.869 argparse: explicitly disabled via build config
00:02:08.869 metrics: explicitly disabled via build config
00:02:08.869 acl: explicitly disabled via build config
00:02:08.869 bbdev: explicitly disabled via build config
00:02:08.869 bitratestats: explicitly disabled via build config
00:02:08.869 bpf: explicitly disabled via build config
00:02:08.869 cfgfile: explicitly disabled via build config
00:02:08.869 distributor: explicitly disabled via build config
00:02:08.869 efd: explicitly disabled via build config
00:02:08.869 eventdev: explicitly disabled via build config
00:02:08.869 dispatcher: explicitly disabled via build config
00:02:08.869 gpudev: explicitly disabled via build config
00:02:08.869 gro: explicitly disabled via build config
00:02:08.869 gso: explicitly disabled via build config
00:02:08.869 ip_frag: explicitly disabled via build config
00:02:08.869 jobstats: explicitly disabled via build config
00:02:08.869 latencystats: explicitly disabled via build config
00:02:08.869 lpm: explicitly disabled via build config
00:02:08.869 member: explicitly disabled via build config
00:02:08.869 pcapng: explicitly disabled via build config
00:02:08.869 rawdev: explicitly disabled via build config
00:02:08.869 regexdev: explicitly disabled via build config
00:02:08.869 mldev: explicitly disabled via build config
00:02:08.869 rib: explicitly disabled via build config
00:02:08.869 sched: explicitly disabled via build config
00:02:08.869 stack: explicitly disabled via build config
00:02:08.869 ipsec: explicitly disabled via build config
00:02:08.869 pdcp: explicitly disabled via build config
00:02:08.869 fib: explicitly disabled via build config
00:02:08.869 port: explicitly disabled via build config
00:02:08.869 pdump: explicitly disabled via build config
00:02:08.869 table: explicitly disabled via build config
00:02:08.869 pipeline: explicitly disabled via build config
00:02:08.869 graph: explicitly disabled via build config
00:02:08.869 node: explicitly disabled via build config
00:02:08.869
00:02:08.869 drivers:
00:02:08.869 common/cpt: not in enabled drivers build config
00:02:08.869 common/dpaax: not in enabled drivers build config
00:02:08.869 common/iavf: not in enabled drivers build config
00:02:08.869 common/idpf: not in enabled drivers build config
00:02:08.869 common/ionic: not in enabled drivers build config
00:02:08.869 common/mvep: not in enabled drivers build config
00:02:08.869 common/octeontx: not in enabled drivers build config
00:02:08.869 bus/auxiliary: not in enabled drivers build config
00:02:08.869 bus/cdx: not in enabled drivers build config
00:02:08.869 bus/dpaa: not in enabled drivers build config
00:02:08.869 bus/fslmc: not in enabled drivers build config
00:02:08.869 bus/ifpga: not in enabled drivers build config
00:02:08.869 bus/platform: not in enabled drivers build config
00:02:08.869 bus/uacce: not in enabled drivers build config
00:02:08.869 bus/vmbus: not in enabled drivers build config
00:02:08.869 common/cnxk: not in enabled drivers build config
00:02:08.869 common/mlx5: not in enabled drivers build config
00:02:08.869 common/nfp: not in enabled drivers build config
00:02:08.869 common/nitrox: not in enabled drivers build config
00:02:08.869 common/qat: not in enabled drivers build config
00:02:08.869 common/sfc_efx: not in enabled drivers build config
00:02:08.869 mempool/bucket: not in enabled drivers build config
00:02:08.869 mempool/cnxk: not in enabled drivers build config
00:02:08.869 mempool/dpaa: not in enabled drivers build config
00:02:08.869 mempool/dpaa2: not in enabled drivers build config
00:02:08.869 mempool/octeontx: not in enabled drivers build config
00:02:08.869 mempool/stack: not in enabled drivers build config
00:02:08.869 dma/cnxk: not in enabled drivers build config
00:02:08.869 dma/dpaa: not in enabled drivers build config
00:02:08.869 dma/dpaa2: not in enabled drivers build config
00:02:08.869 dma/hisilicon: not in enabled drivers build config
00:02:08.869 dma/idxd: not in enabled drivers build config
00:02:08.869 dma/ioat: not in enabled drivers build config
00:02:08.869 dma/skeleton: not in enabled drivers build config
00:02:08.869 net/af_packet: not in enabled drivers build config
00:02:08.869 net/af_xdp: not in enabled drivers build config
00:02:08.869 net/ark: not in enabled drivers build config
00:02:08.869 net/atlantic: not in enabled drivers build config
00:02:08.869 net/avp: not in enabled drivers build config
00:02:08.869 net/axgbe: not in enabled drivers build config
00:02:08.869 net/bnx2x: not in enabled drivers build config
00:02:08.869 net/bnxt: not in enabled drivers build config
00:02:08.869 net/bonding: not in enabled drivers build config
00:02:08.869 net/cnxk: not in enabled drivers build config
00:02:08.869 net/cpfl: not in enabled drivers build config
00:02:08.869 net/cxgbe: not in enabled drivers build config
00:02:08.869 net/dpaa: not in enabled drivers build config
00:02:08.869 net/dpaa2: not in enabled drivers build config
00:02:08.869 net/e1000: not in enabled drivers build config
00:02:08.869 net/ena: not in enabled drivers build config
00:02:08.869 net/enetc: not in enabled drivers build config
00:02:08.869 net/enetfec: not in enabled drivers build config
00:02:08.869 net/enic: not in enabled drivers build config
00:02:08.870 net/failsafe: not in enabled drivers build config
00:02:08.870 net/fm10k: not in enabled drivers build config
00:02:08.870 net/gve: not in enabled drivers build config
00:02:08.870 net/hinic: not in enabled drivers build config
00:02:08.870 net/hns3: not in enabled drivers build config
00:02:08.870 net/i40e: not in enabled drivers build config
00:02:08.870 net/iavf: not in enabled drivers build config
00:02:08.870 net/ice: not in enabled drivers build config
00:02:08.870 net/idpf: not in enabled drivers build config
00:02:08.870 net/igc: not in enabled drivers build config
00:02:08.870 net/ionic: not in enabled drivers build config
00:02:08.870 net/ipn3ke: not in enabled drivers build config
00:02:08.870 net/ixgbe: not in enabled drivers build config
00:02:08.870 net/mana: not in enabled drivers build config
00:02:08.870 net/memif: not in enabled drivers build config
00:02:08.870 net/mlx4: not in enabled drivers build config
00:02:08.870 net/mlx5: not in enabled drivers build config
00:02:08.870 net/mvneta: not in enabled drivers build config
00:02:08.870 net/mvpp2: not in enabled drivers build config
00:02:08.870 net/netvsc: not in enabled drivers build config
00:02:08.870 net/nfb: not in enabled drivers build config
00:02:08.870 net/nfp: not in enabled drivers build config
00:02:08.870 net/ngbe: not in enabled drivers build config
00:02:08.870 net/null: not in enabled drivers build config
00:02:08.870 net/octeontx: not in enabled drivers build config
00:02:08.870 net/octeon_ep: not in enabled drivers build config
00:02:08.870 net/pcap: not in enabled drivers build config
00:02:08.870 net/pfe: not in enabled drivers build config
00:02:08.870 net/qede: not in enabled drivers build config
00:02:08.870 net/ring: not in enabled drivers build config
00:02:08.870 net/sfc: not in enabled drivers build config
00:02:08.870 net/softnic: not in enabled drivers build config
00:02:08.870 net/tap: not in enabled drivers build config
00:02:08.870 net/thunderx: not in enabled drivers build config
00:02:08.870 net/txgbe: not in enabled drivers build config
00:02:08.870 net/vdev_netvsc: not in enabled drivers build config
00:02:08.870 net/vhost: not in enabled drivers build config
00:02:08.870 net/virtio: not in enabled drivers build config
00:02:08.870 net/vmxnet3: not in enabled drivers build config
00:02:08.870 raw/*: missing internal dependency, "rawdev"
00:02:08.870 crypto/armv8: not in enabled drivers build config
00:02:08.870 crypto/bcmfs: not in enabled drivers build config
00:02:08.870 crypto/caam_jr: not in enabled drivers build config
00:02:08.870 crypto/ccp: not in enabled drivers build config
00:02:08.870 crypto/cnxk: not in enabled drivers build config
00:02:08.870 crypto/dpaa_sec: not in enabled drivers build config
00:02:08.870 crypto/dpaa2_sec: not in enabled drivers build config
00:02:08.870 crypto/ipsec_mb: not in enabled drivers build config
00:02:08.870 crypto/mlx5: not in enabled drivers build config
00:02:08.870 crypto/mvsam: not in enabled drivers build config
00:02:08.870 crypto/nitrox: not in enabled drivers build config
00:02:08.870 crypto/null: not in enabled drivers build config
00:02:08.870 crypto/octeontx: not in enabled drivers build config
00:02:08.870 crypto/openssl: not in enabled drivers build config
00:02:08.870 crypto/scheduler: not in enabled drivers build config
00:02:08.870 crypto/uadk: not in enabled drivers build config
00:02:08.870 crypto/virtio: not in enabled drivers build config
00:02:08.870 compress/isal: not in enabled drivers build config
00:02:08.870 compress/mlx5: not in enabled drivers build config
00:02:08.870 compress/nitrox: not in enabled drivers build config
00:02:08.870 compress/octeontx: not in enabled drivers build config
00:02:08.870 compress/zlib: not in enabled drivers build config
00:02:08.870 regex/*: missing internal dependency, "regexdev"
00:02:08.870 ml/*: missing internal dependency, "mldev"
00:02:08.870 vdpa/ifc: not in enabled drivers build config
00:02:08.870 vdpa/mlx5: not in enabled drivers build config
00:02:08.870 vdpa/nfp: not in enabled drivers build config
00:02:08.870 vdpa/sfc: not in enabled drivers build config
00:02:08.870 event/*: missing internal dependency, "eventdev"
00:02:08.870 baseband/*: missing internal dependency, "bbdev"
00:02:08.870 gpu/*: missing internal dependency, "gpudev"
00:02:08.870
00:02:08.870
00:02:08.870 Build targets in project: 85
00:02:08.870
00:02:08.870 DPDK 24.03.0
00:02:08.870
00:02:08.870 User defined options
00:02:08.870 buildtype : debug
00:02:08.870 default_library : shared
00:02:08.870 libdir : lib
00:02:08.870 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:08.870 b_sanitize : address
00:02:08.870 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:08.870 c_link_args :
00:02:08.870 cpu_instruction_set: native
00:02:08.870 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:08.870 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:08.870 enable_docs : false
00:02:08.870 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:08.870 enable_kmods : false
00:02:08.870 max_lcores : 128
00:02:08.870 tests : false
00:02:08.870
00:02:08.870 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:08.870 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:08.870 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:08.870 [2/268] Linking static target lib/librte_kvargs.a
00:02:08.870 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:08.870 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:08.870 [5/268] Linking static target lib/librte_log.a
00:02:08.870 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:09.131 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.131 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:09.131 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:09.131 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:09.131 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:09.131 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:09.131 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:09.131 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:09.131 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:09.391 [16/268] Linking static target lib/librte_telemetry.a
00:02:09.391 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:09.391 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:09.650 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.650 [20/268] Linking target lib/librte_log.so.24.1
00:02:09.650 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:09.650 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:09.910 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:09.910 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:09.910 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:09.910 [26/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:09.910 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:09.910 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:10.169 [29/268] Linking target lib/librte_kvargs.so.24.1
00:02:10.169 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:10.169 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.169 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:10.169 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:10.169 [34/268] Linking target lib/librte_telemetry.so.24.1
00:02:10.447 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:10.447 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:10.447 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:10.447 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:10.447 [39/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:10.710 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:10.710 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:10.710 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:10.710 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:10.710 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:10.970 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:10.970 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:10.970 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:10.970 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:10.970 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:11.229 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:11.229 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:11.229 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:11.229 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:11.488 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:11.488 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:11.488 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:11.488 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:11.488 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:11.748 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:11.748 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:11.748 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:11.748 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:12.008 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:12.008 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:12.008 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:12.008 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:12.267 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:12.267 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:12.267 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:12.267 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:12.528 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:12.528 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:12.528 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:12.528 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:12.528 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:12.528 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:12.787 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:12.787 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:13.046 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:13.046 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:13.046 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:13.046 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:13.046 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:13.305 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:13.305 [85/268] Linking static target lib/librte_ring.a
00:02:13.305 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:13.305 [87/268] Linking static target lib/librte_eal.a
00:02:13.305 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:13.305 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:13.565 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:13.565 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:13.565 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:13.565 [93/268] Linking static target lib/librte_mempool.a
00:02:13.565 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:13.565 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:13.826 [96/268] Linking static target lib/librte_rcu.a
00:02:13.826 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.086 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:14.086 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:14.086 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:14.086 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:14.086 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:14.345 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.345 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:14.345 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:14.604 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:14.604 [107/268] Linking static target lib/librte_net.a
00:02:14.604 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:14.604 [109/268] Linking static target lib/librte_meter.a
00:02:14.604 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:14.864 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:14.864 [112/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:14.864 [113/268] Linking static target lib/librte_mbuf.a
00:02:14.864 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:14.864 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.864 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.123 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.123 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:15.381 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:15.381 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:15.640 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:15.640 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:15.899 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.899 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:15.899 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:15.899 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:15.899 [127/268] Linking static target lib/librte_pci.a
00:02:16.159 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:16.159 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:16.159 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:16.416 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:16.416 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:16.416 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:16.416 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:16.416 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:16.416 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.416 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:16.416 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:16.416 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:16.416 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:16.416 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:16.416 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:16.674 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:16.674 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:16.674 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:16.674 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:16.674 [147/268] Linking static target lib/librte_cmdline.a
00:02:16.934 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:17.193 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:17.193 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:17.193 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:17.193 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:17.454 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:17.454 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:17.454 [155/268] Linking static target lib/librte_timer.a
00:02:17.713 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:17.713 [157/268] Linking static target lib/librte_compressdev.a
00:02:18.021 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:18.021 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:18.021 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:18.021 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:18.021 [162/268] Linking static target lib/librte_hash.a 00:02:18.021 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:18.021 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.329 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:18.329 [166/268] Linking static target lib/librte_dmadev.a 00:02:18.329 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:18.329 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.588 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:18.588 [170/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.588 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:18.588 [172/268] Linking static target lib/librte_ethdev.a 00:02:18.588 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:18.588 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.847 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:19.106 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:19.106 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:19.106 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:19.106 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:19.106 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.106 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 
00:02:19.106 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:19.106 [183/268] Linking static target lib/librte_cryptodev.a 00:02:19.364 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.364 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:19.623 [186/268] Linking static target lib/librte_power.a 00:02:19.623 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:19.623 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:19.882 [189/268] Linking static target lib/librte_reorder.a 00:02:19.882 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:19.882 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:19.882 [192/268] Linking static target lib/librte_security.a 00:02:19.882 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:20.451 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:20.451 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.709 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.709 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.709 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:20.709 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:20.968 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:20.968 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:21.227 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:21.227 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:21.227 [204/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:21.486 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:21.486 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:21.486 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.746 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:21.746 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:21.746 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:21.746 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:22.004 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:22.004 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.004 [214/268] Linking static target drivers/librte_bus_vdev.a 00:02:22.004 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.004 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:22.004 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.004 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.004 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:22.004 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:22.004 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:22.263 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:22.263 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.263 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.263 
[225/268] Linking static target drivers/librte_mempool_ring.a 00:02:22.263 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:22.521 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.478 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:24.432 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.432 [230/268] Linking target lib/librte_eal.so.24.1 00:02:24.691 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:24.691 [232/268] Linking target lib/librte_meter.so.24.1 00:02:24.691 [233/268] Linking target lib/librte_pci.so.24.1 00:02:24.691 [234/268] Linking target lib/librte_ring.so.24.1 00:02:24.691 [235/268] Linking target lib/librte_timer.so.24.1 00:02:24.691 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:24.691 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:24.691 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.691 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.950 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.950 [241/268] Linking target lib/librte_rcu.so.24.1 00:02:24.950 [242/268] Linking target lib/librte_mempool.so.24.1 00:02:24.950 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.950 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.950 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.950 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:24.951 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:24.951 [248/268] Linking target 
drivers/librte_mempool_ring.so.24.1 00:02:24.951 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:25.210 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:25.210 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:02:25.210 [252/268] Linking target lib/librte_compressdev.so.24.1 00:02:25.210 [253/268] Linking target lib/librte_net.so.24.1 00:02:25.210 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:25.469 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.469 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.469 [257/268] Linking target lib/librte_security.so.24.1 00:02:25.469 [258/268] Linking target lib/librte_hash.so.24.1 00:02:25.469 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:25.469 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.845 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.103 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:27.103 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:27.103 [264/268] Linking target lib/librte_power.so.24.1 00:02:28.483 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.483 [266/268] Linking static target lib/librte_vhost.a 00:02:31.018 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.018 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:31.018 INFO: autodetecting backend as ninja 00:02:31.018 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:49.144 CC lib/log/log_flags.o 00:02:49.144 CC lib/log/log_deprecated.o 00:02:49.144 CC lib/log/log.o 00:02:49.144 CC lib/ut_mock/mock.o 00:02:49.144 CC lib/ut/ut.o 00:02:49.144 LIB 
libspdk_log.a 00:02:49.144 LIB libspdk_ut_mock.a 00:02:49.144 LIB libspdk_ut.a 00:02:49.144 SO libspdk_ut_mock.so.6.0 00:02:49.144 SO libspdk_log.so.7.1 00:02:49.144 SO libspdk_ut.so.2.0 00:02:49.144 SYMLINK libspdk_ut_mock.so 00:02:49.144 SYMLINK libspdk_log.so 00:02:49.144 SYMLINK libspdk_ut.so 00:02:49.144 CC lib/util/base64.o 00:02:49.144 CC lib/util/bit_array.o 00:02:49.144 CC lib/util/cpuset.o 00:02:49.144 CC lib/util/crc16.o 00:02:49.144 CC lib/util/crc32.o 00:02:49.144 CC lib/util/crc32c.o 00:02:49.144 CC lib/ioat/ioat.o 00:02:49.144 CXX lib/trace_parser/trace.o 00:02:49.144 CC lib/dma/dma.o 00:02:49.144 CC lib/util/crc32_ieee.o 00:02:49.144 CC lib/util/crc64.o 00:02:49.144 CC lib/vfio_user/host/vfio_user_pci.o 00:02:49.144 CC lib/util/dif.o 00:02:49.144 CC lib/util/fd.o 00:02:49.144 CC lib/util/fd_group.o 00:02:49.144 LIB libspdk_dma.a 00:02:49.144 CC lib/util/file.o 00:02:49.144 CC lib/util/hexlify.o 00:02:49.144 SO libspdk_dma.so.5.0 00:02:49.144 CC lib/util/iov.o 00:02:49.144 LIB libspdk_ioat.a 00:02:49.144 SYMLINK libspdk_dma.so 00:02:49.144 CC lib/util/math.o 00:02:49.144 CC lib/vfio_user/host/vfio_user.o 00:02:49.144 SO libspdk_ioat.so.7.0 00:02:49.144 CC lib/util/net.o 00:02:49.144 SYMLINK libspdk_ioat.so 00:02:49.144 CC lib/util/pipe.o 00:02:49.144 CC lib/util/strerror_tls.o 00:02:49.144 CC lib/util/string.o 00:02:49.144 CC lib/util/uuid.o 00:02:49.144 CC lib/util/xor.o 00:02:49.144 CC lib/util/zipf.o 00:02:49.144 CC lib/util/md5.o 00:02:49.144 LIB libspdk_vfio_user.a 00:02:49.144 SO libspdk_vfio_user.so.5.0 00:02:49.144 SYMLINK libspdk_vfio_user.so 00:02:49.402 LIB libspdk_util.a 00:02:49.662 SO libspdk_util.so.10.1 00:02:49.662 LIB libspdk_trace_parser.a 00:02:49.662 SO libspdk_trace_parser.so.6.0 00:02:49.662 SYMLINK libspdk_util.so 00:02:49.662 SYMLINK libspdk_trace_parser.so 00:02:49.923 CC lib/vmd/vmd.o 00:02:49.923 CC lib/vmd/led.o 00:02:49.923 CC lib/json/json_parse.o 00:02:49.923 CC lib/json/json_write.o 00:02:49.923 CC 
lib/json/json_util.o 00:02:49.923 CC lib/env_dpdk/memory.o 00:02:49.923 CC lib/conf/conf.o 00:02:49.923 CC lib/env_dpdk/env.o 00:02:49.923 CC lib/idxd/idxd.o 00:02:49.923 CC lib/rdma_utils/rdma_utils.o 00:02:49.923 CC lib/env_dpdk/pci.o 00:02:50.183 LIB libspdk_conf.a 00:02:50.183 CC lib/idxd/idxd_user.o 00:02:50.183 SO libspdk_conf.so.6.0 00:02:50.183 CC lib/idxd/idxd_kernel.o 00:02:50.183 LIB libspdk_json.a 00:02:50.183 LIB libspdk_rdma_utils.a 00:02:50.183 SYMLINK libspdk_conf.so 00:02:50.183 CC lib/env_dpdk/init.o 00:02:50.183 SO libspdk_json.so.6.0 00:02:50.183 SO libspdk_rdma_utils.so.1.0 00:02:50.183 SYMLINK libspdk_rdma_utils.so 00:02:50.183 SYMLINK libspdk_json.so 00:02:50.183 CC lib/env_dpdk/threads.o 00:02:50.183 CC lib/env_dpdk/pci_ioat.o 00:02:50.451 CC lib/env_dpdk/pci_virtio.o 00:02:50.451 CC lib/env_dpdk/pci_vmd.o 00:02:50.451 CC lib/env_dpdk/pci_idxd.o 00:02:50.451 CC lib/env_dpdk/pci_event.o 00:02:50.451 CC lib/env_dpdk/sigbus_handler.o 00:02:50.717 CC lib/env_dpdk/pci_dpdk.o 00:02:50.717 CC lib/jsonrpc/jsonrpc_server.o 00:02:50.717 CC lib/rdma_provider/common.o 00:02:50.717 LIB libspdk_vmd.a 00:02:50.717 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:50.717 SO libspdk_vmd.so.6.0 00:02:50.717 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:50.717 LIB libspdk_idxd.a 00:02:50.717 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:50.717 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:50.717 SYMLINK libspdk_vmd.so 00:02:50.717 SO libspdk_idxd.so.12.1 00:02:50.717 CC lib/jsonrpc/jsonrpc_client.o 00:02:50.717 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:50.717 SYMLINK libspdk_idxd.so 00:02:50.975 LIB libspdk_rdma_provider.a 00:02:50.975 SO libspdk_rdma_provider.so.7.0 00:02:50.975 SYMLINK libspdk_rdma_provider.so 00:02:50.975 LIB libspdk_jsonrpc.a 00:02:50.975 SO libspdk_jsonrpc.so.6.0 00:02:51.232 SYMLINK libspdk_jsonrpc.so 00:02:51.491 CC lib/rpc/rpc.o 00:02:51.752 LIB libspdk_env_dpdk.a 00:02:51.752 LIB libspdk_rpc.a 00:02:51.752 SO libspdk_env_dpdk.so.15.1 00:02:51.752 SO 
libspdk_rpc.so.6.0 00:02:52.011 SYMLINK libspdk_rpc.so 00:02:52.011 SYMLINK libspdk_env_dpdk.so 00:02:52.269 CC lib/trace/trace_flags.o 00:02:52.269 CC lib/trace/trace.o 00:02:52.269 CC lib/trace/trace_rpc.o 00:02:52.269 CC lib/keyring/keyring.o 00:02:52.269 CC lib/keyring/keyring_rpc.o 00:02:52.269 CC lib/notify/notify_rpc.o 00:02:52.269 CC lib/notify/notify.o 00:02:52.529 LIB libspdk_notify.a 00:02:52.529 SO libspdk_notify.so.6.0 00:02:52.529 LIB libspdk_keyring.a 00:02:52.529 LIB libspdk_trace.a 00:02:52.529 SO libspdk_keyring.so.2.0 00:02:52.529 SO libspdk_trace.so.11.0 00:02:52.529 SYMLINK libspdk_notify.so 00:02:52.789 SYMLINK libspdk_trace.so 00:02:52.790 SYMLINK libspdk_keyring.so 00:02:53.050 CC lib/sock/sock.o 00:02:53.050 CC lib/sock/sock_rpc.o 00:02:53.050 CC lib/thread/thread.o 00:02:53.050 CC lib/thread/iobuf.o 00:02:53.619 LIB libspdk_sock.a 00:02:53.620 SO libspdk_sock.so.10.0 00:02:53.620 SYMLINK libspdk_sock.so 00:02:54.187 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:54.187 CC lib/nvme/nvme_ctrlr.o 00:02:54.187 CC lib/nvme/nvme_fabric.o 00:02:54.187 CC lib/nvme/nvme_ns_cmd.o 00:02:54.187 CC lib/nvme/nvme_ns.o 00:02:54.187 CC lib/nvme/nvme_pcie_common.o 00:02:54.187 CC lib/nvme/nvme_pcie.o 00:02:54.187 CC lib/nvme/nvme_qpair.o 00:02:54.187 CC lib/nvme/nvme.o 00:02:54.756 CC lib/nvme/nvme_quirks.o 00:02:54.756 CC lib/nvme/nvme_transport.o 00:02:55.016 LIB libspdk_thread.a 00:02:55.016 CC lib/nvme/nvme_discovery.o 00:02:55.016 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:55.016 SO libspdk_thread.so.11.0 00:02:55.016 SYMLINK libspdk_thread.so 00:02:55.016 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:55.016 CC lib/nvme/nvme_tcp.o 00:02:55.016 CC lib/nvme/nvme_opal.o 00:02:55.016 CC lib/nvme/nvme_io_msg.o 00:02:55.276 CC lib/nvme/nvme_poll_group.o 00:02:55.276 CC lib/nvme/nvme_zns.o 00:02:55.535 CC lib/nvme/nvme_stubs.o 00:02:55.535 CC lib/nvme/nvme_auth.o 00:02:55.535 CC lib/accel/accel.o 00:02:55.535 CC lib/blob/blobstore.o 00:02:55.535 CC lib/nvme/nvme_cuse.o 
00:02:55.535 CC lib/nvme/nvme_rdma.o 00:02:55.805 CC lib/blob/request.o 00:02:55.805 CC lib/blob/zeroes.o 00:02:56.065 CC lib/accel/accel_rpc.o 00:02:56.065 CC lib/accel/accel_sw.o 00:02:56.325 CC lib/blob/blob_bs_dev.o 00:02:56.584 CC lib/init/json_config.o 00:02:56.584 CC lib/init/subsystem.o 00:02:56.584 CC lib/init/subsystem_rpc.o 00:02:56.584 CC lib/init/rpc.o 00:02:56.584 CC lib/virtio/virtio.o 00:02:56.844 CC lib/virtio/virtio_vhost_user.o 00:02:56.844 CC lib/virtio/virtio_vfio_user.o 00:02:56.844 CC lib/virtio/virtio_pci.o 00:02:56.844 CC lib/fsdev/fsdev.o 00:02:56.844 LIB libspdk_init.a 00:02:56.844 CC lib/fsdev/fsdev_io.o 00:02:56.844 SO libspdk_init.so.6.0 00:02:56.844 LIB libspdk_accel.a 00:02:56.844 SYMLINK libspdk_init.so 00:02:56.844 CC lib/fsdev/fsdev_rpc.o 00:02:56.844 SO libspdk_accel.so.16.0 00:02:56.844 SYMLINK libspdk_accel.so 00:02:57.103 LIB libspdk_virtio.a 00:02:57.103 SO libspdk_virtio.so.7.0 00:02:57.103 CC lib/bdev/bdev.o 00:02:57.103 CC lib/bdev/bdev_zone.o 00:02:57.103 CC lib/bdev/scsi_nvme.o 00:02:57.103 CC lib/bdev/bdev_rpc.o 00:02:57.103 CC lib/bdev/part.o 00:02:57.103 CC lib/event/app.o 00:02:57.103 SYMLINK libspdk_virtio.so 00:02:57.103 CC lib/event/reactor.o 00:02:57.362 LIB libspdk_nvme.a 00:02:57.362 CC lib/event/log_rpc.o 00:02:57.362 CC lib/event/app_rpc.o 00:02:57.362 LIB libspdk_fsdev.a 00:02:57.622 SO libspdk_fsdev.so.2.0 00:02:57.622 SO libspdk_nvme.so.15.0 00:02:57.622 CC lib/event/scheduler_static.o 00:02:57.622 SYMLINK libspdk_fsdev.so 00:02:57.881 LIB libspdk_event.a 00:02:57.881 SYMLINK libspdk_nvme.so 00:02:57.881 SO libspdk_event.so.14.0 00:02:57.881 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:57.881 SYMLINK libspdk_event.so 00:02:58.819 LIB libspdk_fuse_dispatcher.a 00:02:58.819 SO libspdk_fuse_dispatcher.so.1.0 00:02:58.819 SYMLINK libspdk_fuse_dispatcher.so 00:02:59.755 LIB libspdk_blob.a 00:02:59.755 SO libspdk_blob.so.12.0 00:02:59.755 SYMLINK libspdk_blob.so 00:03:00.320 CC lib/blobfs/blobfs.o 00:03:00.320 
CC lib/blobfs/tree.o 00:03:00.320 CC lib/lvol/lvol.o 00:03:00.580 LIB libspdk_bdev.a 00:03:00.580 SO libspdk_bdev.so.17.0 00:03:00.839 SYMLINK libspdk_bdev.so 00:03:01.098 CC lib/scsi/dev.o 00:03:01.098 CC lib/scsi/lun.o 00:03:01.098 CC lib/scsi/port.o 00:03:01.098 CC lib/scsi/scsi.o 00:03:01.098 CC lib/ublk/ublk.o 00:03:01.098 CC lib/ftl/ftl_core.o 00:03:01.098 CC lib/nbd/nbd.o 00:03:01.098 CC lib/nvmf/ctrlr.o 00:03:01.098 CC lib/nvmf/ctrlr_discovery.o 00:03:01.098 CC lib/ublk/ublk_rpc.o 00:03:01.098 LIB libspdk_blobfs.a 00:03:01.356 SO libspdk_blobfs.so.11.0 00:03:01.356 CC lib/scsi/scsi_bdev.o 00:03:01.356 SYMLINK libspdk_blobfs.so 00:03:01.356 CC lib/scsi/scsi_pr.o 00:03:01.356 CC lib/nbd/nbd_rpc.o 00:03:01.356 CC lib/nvmf/ctrlr_bdev.o 00:03:01.356 LIB libspdk_lvol.a 00:03:01.356 CC lib/ftl/ftl_init.o 00:03:01.616 CC lib/ftl/ftl_layout.o 00:03:01.616 SO libspdk_lvol.so.11.0 00:03:01.616 LIB libspdk_nbd.a 00:03:01.616 SYMLINK libspdk_lvol.so 00:03:01.616 CC lib/scsi/scsi_rpc.o 00:03:01.616 SO libspdk_nbd.so.7.0 00:03:01.616 SYMLINK libspdk_nbd.so 00:03:01.616 CC lib/scsi/task.o 00:03:01.616 CC lib/ftl/ftl_debug.o 00:03:01.875 CC lib/ftl/ftl_io.o 00:03:01.875 CC lib/ftl/ftl_sb.o 00:03:01.875 CC lib/nvmf/subsystem.o 00:03:01.875 LIB libspdk_ublk.a 00:03:01.875 SO libspdk_ublk.so.3.0 00:03:01.875 CC lib/ftl/ftl_l2p.o 00:03:01.875 CC lib/nvmf/nvmf.o 00:03:01.875 LIB libspdk_scsi.a 00:03:01.875 SYMLINK libspdk_ublk.so 00:03:01.875 CC lib/nvmf/nvmf_rpc.o 00:03:01.875 CC lib/nvmf/transport.o 00:03:01.875 CC lib/nvmf/tcp.o 00:03:01.875 SO libspdk_scsi.so.9.0 00:03:02.134 CC lib/ftl/ftl_l2p_flat.o 00:03:02.134 SYMLINK libspdk_scsi.so 00:03:02.134 CC lib/nvmf/stubs.o 00:03:02.134 CC lib/ftl/ftl_nv_cache.o 00:03:02.134 CC lib/nvmf/mdns_server.o 00:03:02.134 CC lib/nvmf/rdma.o 00:03:02.704 CC lib/nvmf/auth.o 00:03:02.704 CC lib/ftl/ftl_band.o 00:03:02.704 CC lib/ftl/ftl_band_ops.o 00:03:02.963 CC lib/iscsi/conn.o 00:03:02.963 CC lib/vhost/vhost.o 00:03:03.222 CC 
lib/vhost/vhost_rpc.o 00:03:03.222 CC lib/vhost/vhost_scsi.o 00:03:03.222 CC lib/iscsi/init_grp.o 00:03:03.222 CC lib/ftl/ftl_writer.o 00:03:03.222 CC lib/vhost/vhost_blk.o 00:03:03.517 CC lib/vhost/rte_vhost_user.o 00:03:03.517 CC lib/iscsi/iscsi.o 00:03:03.517 CC lib/ftl/ftl_rq.o 00:03:03.807 CC lib/ftl/ftl_reloc.o 00:03:03.807 CC lib/ftl/ftl_l2p_cache.o 00:03:03.807 CC lib/ftl/ftl_p2l.o 00:03:04.066 CC lib/iscsi/param.o 00:03:04.066 CC lib/iscsi/portal_grp.o 00:03:04.066 CC lib/iscsi/tgt_node.o 00:03:04.325 CC lib/iscsi/iscsi_subsystem.o 00:03:04.325 CC lib/iscsi/iscsi_rpc.o 00:03:04.325 CC lib/iscsi/task.o 00:03:04.325 CC lib/ftl/ftl_p2l_log.o 00:03:04.325 CC lib/ftl/mngt/ftl_mngt.o 00:03:04.325 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:04.584 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:04.584 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:04.584 LIB libspdk_vhost.a 00:03:04.584 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:04.584 SO libspdk_vhost.so.8.0 00:03:04.584 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:04.843 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:04.843 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:04.843 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:04.843 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:04.843 SYMLINK libspdk_vhost.so 00:03:04.843 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:04.843 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:04.843 LIB libspdk_nvmf.a 00:03:04.843 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:05.102 CC lib/ftl/utils/ftl_conf.o 00:03:05.102 CC lib/ftl/utils/ftl_md.o 00:03:05.102 CC lib/ftl/utils/ftl_mempool.o 00:03:05.102 CC lib/ftl/utils/ftl_bitmap.o 00:03:05.102 CC lib/ftl/utils/ftl_property.o 00:03:05.102 SO libspdk_nvmf.so.20.0 00:03:05.102 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:05.102 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:05.102 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:05.102 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:05.361 LIB libspdk_iscsi.a 00:03:05.362 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:05.362 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:05.362 SO 
libspdk_iscsi.so.8.0 00:03:05.362 SYMLINK libspdk_nvmf.so 00:03:05.362 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:05.362 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:05.362 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:05.362 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:05.362 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:05.362 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:05.620 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:05.620 CC lib/ftl/base/ftl_base_dev.o 00:03:05.620 SYMLINK libspdk_iscsi.so 00:03:05.620 CC lib/ftl/base/ftl_base_bdev.o 00:03:05.620 CC lib/ftl/ftl_trace.o 00:03:05.878 LIB libspdk_ftl.a 00:03:06.137 SO libspdk_ftl.so.9.0 00:03:06.397 SYMLINK libspdk_ftl.so 00:03:06.656 CC module/env_dpdk/env_dpdk_rpc.o 00:03:06.656 CC module/accel/dsa/accel_dsa.o 00:03:06.656 CC module/fsdev/aio/fsdev_aio.o 00:03:06.656 CC module/sock/posix/posix.o 00:03:06.656 CC module/keyring/file/keyring.o 00:03:06.656 CC module/accel/error/accel_error.o 00:03:06.656 CC module/keyring/linux/keyring.o 00:03:06.656 CC module/blob/bdev/blob_bdev.o 00:03:06.656 CC module/accel/ioat/accel_ioat.o 00:03:06.656 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:06.915 LIB libspdk_env_dpdk_rpc.a 00:03:06.915 SO libspdk_env_dpdk_rpc.so.6.0 00:03:06.915 CC module/keyring/linux/keyring_rpc.o 00:03:06.915 SYMLINK libspdk_env_dpdk_rpc.so 00:03:06.915 CC module/keyring/file/keyring_rpc.o 00:03:06.915 CC module/accel/error/accel_error_rpc.o 00:03:06.915 CC module/accel/ioat/accel_ioat_rpc.o 00:03:06.915 LIB libspdk_scheduler_dynamic.a 00:03:06.915 SO libspdk_scheduler_dynamic.so.4.0 00:03:06.915 LIB libspdk_keyring_linux.a 00:03:06.915 LIB libspdk_keyring_file.a 00:03:07.173 LIB libspdk_accel_error.a 00:03:07.173 LIB libspdk_blob_bdev.a 00:03:07.173 SYMLINK libspdk_scheduler_dynamic.so 00:03:07.173 SO libspdk_keyring_linux.so.1.0 00:03:07.173 CC module/accel/dsa/accel_dsa_rpc.o 00:03:07.173 SO libspdk_keyring_file.so.2.0 00:03:07.173 SO libspdk_accel_error.so.2.0 00:03:07.173 SO libspdk_blob_bdev.so.12.0 00:03:07.173 
LIB libspdk_accel_ioat.a 00:03:07.173 SYMLINK libspdk_keyring_linux.so 00:03:07.173 SYMLINK libspdk_keyring_file.so 00:03:07.173 SYMLINK libspdk_blob_bdev.so 00:03:07.173 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:07.173 CC module/accel/iaa/accel_iaa.o 00:03:07.173 CC module/accel/iaa/accel_iaa_rpc.o 00:03:07.173 SYMLINK libspdk_accel_error.so 00:03:07.173 SO libspdk_accel_ioat.so.6.0 00:03:07.173 SYMLINK libspdk_accel_ioat.so 00:03:07.173 CC module/fsdev/aio/linux_aio_mgr.o 00:03:07.173 LIB libspdk_accel_dsa.a 00:03:07.173 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:07.173 SO libspdk_accel_dsa.so.5.0 00:03:07.433 CC module/scheduler/gscheduler/gscheduler.o 00:03:07.433 LIB libspdk_accel_iaa.a 00:03:07.433 SYMLINK libspdk_accel_dsa.so 00:03:07.433 SO libspdk_accel_iaa.so.3.0 00:03:07.433 LIB libspdk_scheduler_dpdk_governor.a 00:03:07.433 CC module/bdev/delay/vbdev_delay.o 00:03:07.433 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:07.433 SYMLINK libspdk_accel_iaa.so 00:03:07.433 CC module/bdev/error/vbdev_error.o 00:03:07.433 CC module/bdev/gpt/gpt.o 00:03:07.433 LIB libspdk_scheduler_gscheduler.a 00:03:07.433 SO libspdk_scheduler_gscheduler.so.4.0 00:03:07.433 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:07.433 CC module/bdev/gpt/vbdev_gpt.o 00:03:07.433 CC module/bdev/lvol/vbdev_lvol.o 00:03:07.433 LIB libspdk_fsdev_aio.a 00:03:07.692 SYMLINK libspdk_scheduler_gscheduler.so 00:03:07.692 CC module/bdev/error/vbdev_error_rpc.o 00:03:07.692 SO libspdk_fsdev_aio.so.1.0 00:03:07.692 CC module/bdev/malloc/bdev_malloc.o 00:03:07.692 LIB libspdk_sock_posix.a 00:03:07.692 SYMLINK libspdk_fsdev_aio.so 00:03:07.692 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:07.692 SO libspdk_sock_posix.so.6.0 00:03:07.692 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:07.692 CC module/blobfs/bdev/blobfs_bdev.o 00:03:07.692 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:07.692 SYMLINK libspdk_sock_posix.so 00:03:07.692 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:07.692 LIB 
libspdk_bdev_gpt.a 00:03:07.952 SO libspdk_bdev_gpt.so.6.0 00:03:07.952 LIB libspdk_bdev_delay.a 00:03:07.952 LIB libspdk_bdev_error.a 00:03:07.952 SO libspdk_bdev_delay.so.6.0 00:03:07.952 SO libspdk_bdev_error.so.6.0 00:03:07.952 SYMLINK libspdk_bdev_gpt.so 00:03:07.952 SYMLINK libspdk_bdev_delay.so 00:03:07.952 LIB libspdk_blobfs_bdev.a 00:03:07.952 SYMLINK libspdk_bdev_error.so 00:03:07.952 SO libspdk_blobfs_bdev.so.6.0 00:03:07.952 CC module/bdev/null/bdev_null.o 00:03:07.952 CC module/bdev/nvme/bdev_nvme.o 00:03:07.952 LIB libspdk_bdev_malloc.a 00:03:07.952 CC module/bdev/passthru/vbdev_passthru.o 00:03:07.952 SYMLINK libspdk_blobfs_bdev.so 00:03:07.952 CC module/bdev/null/bdev_null_rpc.o 00:03:07.952 CC module/bdev/raid/bdev_raid.o 00:03:07.952 SO libspdk_bdev_malloc.so.6.0 00:03:08.211 CC module/bdev/split/vbdev_split.o 00:03:08.211 SYMLINK libspdk_bdev_malloc.so 00:03:08.211 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:08.211 LIB libspdk_bdev_lvol.a 00:03:08.211 SO libspdk_bdev_lvol.so.6.0 00:03:08.211 CC module/bdev/aio/bdev_aio.o 00:03:08.211 CC module/bdev/ftl/bdev_ftl.o 00:03:08.211 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:08.211 SYMLINK libspdk_bdev_lvol.so 00:03:08.211 LIB libspdk_bdev_null.a 00:03:08.470 CC module/bdev/split/vbdev_split_rpc.o 00:03:08.470 SO libspdk_bdev_null.so.6.0 00:03:08.470 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:08.470 SYMLINK libspdk_bdev_null.so 00:03:08.470 CC module/bdev/raid/bdev_raid_rpc.o 00:03:08.470 CC module/bdev/iscsi/bdev_iscsi.o 00:03:08.470 LIB libspdk_bdev_zone_block.a 00:03:08.470 LIB libspdk_bdev_split.a 00:03:08.470 SO libspdk_bdev_zone_block.so.6.0 00:03:08.470 SO libspdk_bdev_split.so.6.0 00:03:08.470 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:08.470 LIB libspdk_bdev_passthru.a 00:03:08.729 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:08.729 SO libspdk_bdev_passthru.so.6.0 00:03:08.729 SYMLINK libspdk_bdev_zone_block.so 00:03:08.729 CC module/bdev/iscsi/bdev_iscsi_rpc.o 
00:03:08.729 SYMLINK libspdk_bdev_split.so 00:03:08.729 CC module/bdev/raid/bdev_raid_sb.o 00:03:08.729 CC module/bdev/aio/bdev_aio_rpc.o 00:03:08.729 SYMLINK libspdk_bdev_passthru.so 00:03:08.729 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:08.729 CC module/bdev/raid/raid0.o 00:03:08.729 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:08.729 LIB libspdk_bdev_aio.a 00:03:08.729 LIB libspdk_bdev_ftl.a 00:03:08.988 SO libspdk_bdev_ftl.so.6.0 00:03:08.988 SO libspdk_bdev_aio.so.6.0 00:03:08.988 LIB libspdk_bdev_iscsi.a 00:03:08.988 CC module/bdev/raid/raid1.o 00:03:08.988 SO libspdk_bdev_iscsi.so.6.0 00:03:08.988 SYMLINK libspdk_bdev_ftl.so 00:03:08.988 SYMLINK libspdk_bdev_aio.so 00:03:08.988 CC module/bdev/raid/concat.o 00:03:08.988 CC module/bdev/raid/raid5f.o 00:03:08.988 SYMLINK libspdk_bdev_iscsi.so 00:03:08.988 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:08.988 CC module/bdev/nvme/nvme_rpc.o 00:03:08.988 CC module/bdev/nvme/bdev_mdns_client.o 00:03:08.988 CC module/bdev/nvme/vbdev_opal.o 00:03:09.248 LIB libspdk_bdev_virtio.a 00:03:09.248 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:09.248 SO libspdk_bdev_virtio.so.6.0 00:03:09.248 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:09.248 SYMLINK libspdk_bdev_virtio.so 00:03:09.508 LIB libspdk_bdev_raid.a 00:03:09.508 SO libspdk_bdev_raid.so.6.0 00:03:09.768 SYMLINK libspdk_bdev_raid.so 00:03:11.153 LIB libspdk_bdev_nvme.a 00:03:11.153 SO libspdk_bdev_nvme.so.7.1 00:03:11.153 SYMLINK libspdk_bdev_nvme.so 00:03:11.723 CC module/event/subsystems/iobuf/iobuf.o 00:03:11.723 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:11.723 CC module/event/subsystems/keyring/keyring.o 00:03:11.723 CC module/event/subsystems/sock/sock.o 00:03:11.723 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:11.723 CC module/event/subsystems/vmd/vmd.o 00:03:11.723 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:11.723 CC module/event/subsystems/scheduler/scheduler.o 00:03:11.723 CC module/event/subsystems/fsdev/fsdev.o 00:03:11.982 LIB 
libspdk_event_sock.a 00:03:11.982 LIB libspdk_event_scheduler.a 00:03:11.982 LIB libspdk_event_iobuf.a 00:03:11.982 LIB libspdk_event_vmd.a 00:03:11.982 LIB libspdk_event_fsdev.a 00:03:11.982 SO libspdk_event_sock.so.5.0 00:03:11.982 SO libspdk_event_scheduler.so.4.0 00:03:11.982 LIB libspdk_event_keyring.a 00:03:11.982 SO libspdk_event_iobuf.so.3.0 00:03:11.982 LIB libspdk_event_vhost_blk.a 00:03:11.982 SO libspdk_event_fsdev.so.1.0 00:03:11.982 SO libspdk_event_vmd.so.6.0 00:03:11.982 SO libspdk_event_vhost_blk.so.3.0 00:03:11.982 SO libspdk_event_keyring.so.1.0 00:03:11.982 SYMLINK libspdk_event_sock.so 00:03:11.982 SYMLINK libspdk_event_scheduler.so 00:03:11.982 SYMLINK libspdk_event_iobuf.so 00:03:11.982 SYMLINK libspdk_event_vmd.so 00:03:11.982 SYMLINK libspdk_event_fsdev.so 00:03:11.982 SYMLINK libspdk_event_vhost_blk.so 00:03:11.982 SYMLINK libspdk_event_keyring.so 00:03:12.551 CC module/event/subsystems/accel/accel.o 00:03:12.551 LIB libspdk_event_accel.a 00:03:12.551 SO libspdk_event_accel.so.6.0 00:03:12.811 SYMLINK libspdk_event_accel.so 00:03:13.071 CC module/event/subsystems/bdev/bdev.o 00:03:13.330 LIB libspdk_event_bdev.a 00:03:13.330 SO libspdk_event_bdev.so.6.0 00:03:13.330 SYMLINK libspdk_event_bdev.so 00:03:13.590 CC module/event/subsystems/ublk/ublk.o 00:03:13.590 CC module/event/subsystems/nbd/nbd.o 00:03:13.590 CC module/event/subsystems/scsi/scsi.o 00:03:13.850 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:13.850 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:13.850 LIB libspdk_event_ublk.a 00:03:13.850 LIB libspdk_event_nbd.a 00:03:13.850 LIB libspdk_event_scsi.a 00:03:13.850 SO libspdk_event_ublk.so.3.0 00:03:13.850 SO libspdk_event_nbd.so.6.0 00:03:13.850 SO libspdk_event_scsi.so.6.0 00:03:13.850 SYMLINK libspdk_event_ublk.so 00:03:13.850 SYMLINK libspdk_event_scsi.so 00:03:14.110 SYMLINK libspdk_event_nbd.so 00:03:14.110 LIB libspdk_event_nvmf.a 00:03:14.110 SO libspdk_event_nvmf.so.6.0 00:03:14.110 SYMLINK libspdk_event_nvmf.so 
00:03:14.369 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:14.369 CC module/event/subsystems/iscsi/iscsi.o 00:03:14.628 LIB libspdk_event_vhost_scsi.a 00:03:14.628 LIB libspdk_event_iscsi.a 00:03:14.628 SO libspdk_event_vhost_scsi.so.3.0 00:03:14.628 SO libspdk_event_iscsi.so.6.0 00:03:14.628 SYMLINK libspdk_event_vhost_scsi.so 00:03:14.628 SYMLINK libspdk_event_iscsi.so 00:03:14.890 SO libspdk.so.6.0 00:03:14.890 SYMLINK libspdk.so 00:03:15.149 CC app/trace_record/trace_record.o 00:03:15.149 CXX app/trace/trace.o 00:03:15.149 CC app/spdk_lspci/spdk_lspci.o 00:03:15.149 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:15.149 CC app/nvmf_tgt/nvmf_main.o 00:03:15.149 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.149 CC examples/ioat/perf/perf.o 00:03:15.149 CC app/spdk_tgt/spdk_tgt.o 00:03:15.149 CC examples/util/zipf/zipf.o 00:03:15.409 CC test/thread/poller_perf/poller_perf.o 00:03:15.409 LINK spdk_lspci 00:03:15.409 LINK interrupt_tgt 00:03:15.409 LINK zipf 00:03:15.409 LINK poller_perf 00:03:15.409 LINK nvmf_tgt 00:03:15.409 LINK iscsi_tgt 00:03:15.409 LINK spdk_trace_record 00:03:15.409 LINK spdk_tgt 00:03:15.409 LINK ioat_perf 00:03:15.668 CC app/spdk_nvme_perf/perf.o 00:03:15.668 LINK spdk_trace 00:03:15.668 CC app/spdk_nvme_identify/identify.o 00:03:15.668 CC app/spdk_nvme_discover/discovery_aer.o 00:03:15.668 CC examples/ioat/verify/verify.o 00:03:15.668 CC app/spdk_top/spdk_top.o 00:03:15.927 CC test/dma/test_dma/test_dma.o 00:03:15.927 CC app/spdk_dd/spdk_dd.o 00:03:15.927 CC test/app/bdev_svc/bdev_svc.o 00:03:15.927 LINK spdk_nvme_discover 00:03:15.927 CC app/fio/nvme/fio_plugin.o 00:03:15.927 LINK verify 00:03:15.927 CC examples/thread/thread/thread_ex.o 00:03:16.186 LINK bdev_svc 00:03:16.186 CC app/vhost/vhost.o 00:03:16.186 LINK spdk_dd 00:03:16.445 CC app/fio/bdev/fio_plugin.o 00:03:16.445 LINK thread 00:03:16.445 LINK test_dma 00:03:16.445 LINK vhost 00:03:16.445 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:16.445 LINK spdk_nvme_perf 00:03:16.704 
LINK spdk_nvme 00:03:16.704 CC test/app/histogram_perf/histogram_perf.o 00:03:16.704 CC test/app/jsoncat/jsoncat.o 00:03:16.704 LINK spdk_nvme_identify 00:03:16.704 LINK histogram_perf 00:03:16.704 CC examples/sock/hello_world/hello_sock.o 00:03:16.704 CC test/app/stub/stub.o 00:03:16.704 CC examples/vmd/lsvmd/lsvmd.o 00:03:16.963 CC examples/vmd/led/led.o 00:03:16.963 LINK jsoncat 00:03:16.963 LINK spdk_top 00:03:16.963 LINK spdk_bdev 00:03:16.963 LINK lsvmd 00:03:16.963 LINK stub 00:03:16.963 LINK led 00:03:16.963 LINK nvme_fuzz 00:03:16.963 TEST_HEADER include/spdk/accel.h 00:03:16.963 TEST_HEADER include/spdk/accel_module.h 00:03:16.963 TEST_HEADER include/spdk/assert.h 00:03:16.963 TEST_HEADER include/spdk/barrier.h 00:03:16.963 TEST_HEADER include/spdk/base64.h 00:03:16.963 TEST_HEADER include/spdk/bdev.h 00:03:16.963 TEST_HEADER include/spdk/bdev_module.h 00:03:16.963 TEST_HEADER include/spdk/bdev_zone.h 00:03:16.963 TEST_HEADER include/spdk/bit_array.h 00:03:16.963 TEST_HEADER include/spdk/bit_pool.h 00:03:16.963 TEST_HEADER include/spdk/blob_bdev.h 00:03:16.963 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:16.963 TEST_HEADER include/spdk/blobfs.h 00:03:16.963 TEST_HEADER include/spdk/blob.h 00:03:16.963 TEST_HEADER include/spdk/conf.h 00:03:16.963 TEST_HEADER include/spdk/config.h 00:03:16.963 TEST_HEADER include/spdk/cpuset.h 00:03:16.963 TEST_HEADER include/spdk/crc16.h 00:03:16.963 TEST_HEADER include/spdk/crc32.h 00:03:16.963 TEST_HEADER include/spdk/crc64.h 00:03:16.963 TEST_HEADER include/spdk/dif.h 00:03:16.963 TEST_HEADER include/spdk/dma.h 00:03:16.963 TEST_HEADER include/spdk/endian.h 00:03:16.963 TEST_HEADER include/spdk/env_dpdk.h 00:03:16.963 LINK hello_sock 00:03:16.963 TEST_HEADER include/spdk/env.h 00:03:16.963 TEST_HEADER include/spdk/event.h 00:03:16.963 TEST_HEADER include/spdk/fd_group.h 00:03:17.221 TEST_HEADER include/spdk/fd.h 00:03:17.221 TEST_HEADER include/spdk/file.h 00:03:17.221 TEST_HEADER include/spdk/fsdev.h 00:03:17.221 
TEST_HEADER include/spdk/fsdev_module.h 00:03:17.221 TEST_HEADER include/spdk/ftl.h 00:03:17.221 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:17.221 TEST_HEADER include/spdk/gpt_spec.h 00:03:17.221 TEST_HEADER include/spdk/hexlify.h 00:03:17.221 TEST_HEADER include/spdk/histogram_data.h 00:03:17.221 TEST_HEADER include/spdk/idxd.h 00:03:17.221 TEST_HEADER include/spdk/idxd_spec.h 00:03:17.221 TEST_HEADER include/spdk/init.h 00:03:17.221 TEST_HEADER include/spdk/ioat.h 00:03:17.221 TEST_HEADER include/spdk/ioat_spec.h 00:03:17.221 TEST_HEADER include/spdk/iscsi_spec.h 00:03:17.221 TEST_HEADER include/spdk/json.h 00:03:17.221 TEST_HEADER include/spdk/jsonrpc.h 00:03:17.221 TEST_HEADER include/spdk/keyring.h 00:03:17.221 TEST_HEADER include/spdk/keyring_module.h 00:03:17.221 TEST_HEADER include/spdk/likely.h 00:03:17.221 TEST_HEADER include/spdk/log.h 00:03:17.221 TEST_HEADER include/spdk/lvol.h 00:03:17.221 TEST_HEADER include/spdk/md5.h 00:03:17.221 TEST_HEADER include/spdk/memory.h 00:03:17.221 TEST_HEADER include/spdk/mmio.h 00:03:17.221 TEST_HEADER include/spdk/nbd.h 00:03:17.221 TEST_HEADER include/spdk/net.h 00:03:17.221 TEST_HEADER include/spdk/notify.h 00:03:17.221 TEST_HEADER include/spdk/nvme.h 00:03:17.221 TEST_HEADER include/spdk/nvme_intel.h 00:03:17.221 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:17.221 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:17.221 CC test/rpc_client/rpc_client_test.o 00:03:17.221 TEST_HEADER include/spdk/nvme_spec.h 00:03:17.221 TEST_HEADER include/spdk/nvme_zns.h 00:03:17.221 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:17.221 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:17.221 TEST_HEADER include/spdk/nvmf.h 00:03:17.221 TEST_HEADER include/spdk/nvmf_spec.h 00:03:17.221 TEST_HEADER include/spdk/nvmf_transport.h 00:03:17.221 TEST_HEADER include/spdk/opal.h 00:03:17.221 TEST_HEADER include/spdk/opal_spec.h 00:03:17.221 TEST_HEADER include/spdk/pci_ids.h 00:03:17.221 TEST_HEADER include/spdk/pipe.h 00:03:17.221 TEST_HEADER 
include/spdk/queue.h 00:03:17.221 TEST_HEADER include/spdk/reduce.h 00:03:17.221 TEST_HEADER include/spdk/rpc.h 00:03:17.221 TEST_HEADER include/spdk/scheduler.h 00:03:17.221 TEST_HEADER include/spdk/scsi.h 00:03:17.221 TEST_HEADER include/spdk/scsi_spec.h 00:03:17.221 TEST_HEADER include/spdk/sock.h 00:03:17.221 TEST_HEADER include/spdk/stdinc.h 00:03:17.221 TEST_HEADER include/spdk/string.h 00:03:17.221 TEST_HEADER include/spdk/thread.h 00:03:17.221 TEST_HEADER include/spdk/trace.h 00:03:17.221 TEST_HEADER include/spdk/trace_parser.h 00:03:17.221 TEST_HEADER include/spdk/tree.h 00:03:17.221 TEST_HEADER include/spdk/ublk.h 00:03:17.221 TEST_HEADER include/spdk/util.h 00:03:17.221 TEST_HEADER include/spdk/uuid.h 00:03:17.221 CC test/nvme/aer/aer.o 00:03:17.221 CC test/event/event_perf/event_perf.o 00:03:17.221 TEST_HEADER include/spdk/version.h 00:03:17.221 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:17.221 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:17.221 TEST_HEADER include/spdk/vhost.h 00:03:17.221 TEST_HEADER include/spdk/vmd.h 00:03:17.221 TEST_HEADER include/spdk/xor.h 00:03:17.221 TEST_HEADER include/spdk/zipf.h 00:03:17.221 CXX test/cpp_headers/accel.o 00:03:17.221 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.221 CC test/nvme/reset/reset.o 00:03:17.221 CC test/env/vtophys/vtophys.o 00:03:17.221 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:17.221 CC test/nvme/sgl/sgl.o 00:03:17.479 LINK rpc_client_test 00:03:17.479 CXX test/cpp_headers/accel_module.o 00:03:17.479 LINK event_perf 00:03:17.479 LINK vtophys 00:03:17.479 CC examples/idxd/perf/perf.o 00:03:17.479 LINK reset 00:03:17.479 LINK aer 00:03:17.737 CXX test/cpp_headers/assert.o 00:03:17.737 LINK sgl 00:03:17.737 CC test/event/reactor/reactor.o 00:03:17.737 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:17.737 CC test/accel/dif/dif.o 00:03:17.737 CXX test/cpp_headers/barrier.o 00:03:17.737 LINK reactor 00:03:17.737 CC test/nvme/e2edp/nvme_dp.o 00:03:17.996 LINK idxd_perf 
00:03:17.996 LINK mem_callbacks 00:03:17.996 LINK env_dpdk_post_init 00:03:17.996 CXX test/cpp_headers/base64.o 00:03:17.996 CC test/blobfs/mkfs/mkfs.o 00:03:17.996 CC test/event/reactor_perf/reactor_perf.o 00:03:18.256 CC test/env/memory/memory_ut.o 00:03:18.256 CC test/nvme/overhead/overhead.o 00:03:18.256 CC test/lvol/esnap/esnap.o 00:03:18.256 CXX test/cpp_headers/bdev.o 00:03:18.256 LINK nvme_dp 00:03:18.256 LINK mkfs 00:03:18.256 LINK reactor_perf 00:03:18.256 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:18.256 CXX test/cpp_headers/bdev_module.o 00:03:18.515 LINK overhead 00:03:18.515 CC test/nvme/err_injection/err_injection.o 00:03:18.515 CC test/nvme/startup/startup.o 00:03:18.515 CC test/event/app_repeat/app_repeat.o 00:03:18.515 LINK dif 00:03:18.515 CXX test/cpp_headers/bdev_zone.o 00:03:18.515 LINK hello_fsdev 00:03:18.774 LINK app_repeat 00:03:18.774 LINK startup 00:03:18.774 LINK err_injection 00:03:18.774 CC test/nvme/reserve/reserve.o 00:03:18.774 CXX test/cpp_headers/bit_array.o 00:03:18.774 CXX test/cpp_headers/bit_pool.o 00:03:18.774 CC test/env/pci/pci_ut.o 00:03:19.033 CC test/event/scheduler/scheduler.o 00:03:19.033 LINK reserve 00:03:19.033 CXX test/cpp_headers/blob_bdev.o 00:03:19.033 CC examples/accel/perf/accel_perf.o 00:03:19.033 CC examples/blob/hello_world/hello_blob.o 00:03:19.033 CC test/bdev/bdevio/bdevio.o 00:03:19.292 LINK scheduler 00:03:19.292 CXX test/cpp_headers/blobfs_bdev.o 00:03:19.292 CC test/nvme/simple_copy/simple_copy.o 00:03:19.292 LINK hello_blob 00:03:19.292 LINK pci_ut 00:03:19.292 LINK iscsi_fuzz 00:03:19.292 CXX test/cpp_headers/blobfs.o 00:03:19.551 LINK memory_ut 00:03:19.551 LINK simple_copy 00:03:19.552 CC examples/blob/cli/blobcli.o 00:03:19.552 CXX test/cpp_headers/blob.o 00:03:19.552 LINK bdevio 00:03:19.552 LINK accel_perf 00:03:19.552 CXX test/cpp_headers/conf.o 00:03:19.552 CXX test/cpp_headers/config.o 00:03:19.552 CXX test/cpp_headers/cpuset.o 00:03:19.810 CXX test/cpp_headers/crc16.o 00:03:19.810 
CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:19.810 CC examples/nvme/hello_world/hello_world.o 00:03:19.810 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:19.810 CC test/nvme/connect_stress/connect_stress.o 00:03:19.810 CXX test/cpp_headers/crc32.o 00:03:19.810 CXX test/cpp_headers/crc64.o 00:03:19.810 CXX test/cpp_headers/dif.o 00:03:19.810 CXX test/cpp_headers/dma.o 00:03:19.810 CXX test/cpp_headers/endian.o 00:03:20.069 CC test/nvme/boot_partition/boot_partition.o 00:03:20.069 CXX test/cpp_headers/env_dpdk.o 00:03:20.069 LINK hello_world 00:03:20.069 LINK connect_stress 00:03:20.069 LINK blobcli 00:03:20.069 CC test/nvme/compliance/nvme_compliance.o 00:03:20.069 CC test/nvme/fused_ordering/fused_ordering.o 00:03:20.069 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:20.069 LINK boot_partition 00:03:20.069 CXX test/cpp_headers/env.o 00:03:20.069 CXX test/cpp_headers/event.o 00:03:20.327 LINK vhost_fuzz 00:03:20.328 CC examples/nvme/reconnect/reconnect.o 00:03:20.328 LINK fused_ordering 00:03:20.328 LINK doorbell_aers 00:03:20.328 CXX test/cpp_headers/fd_group.o 00:03:20.328 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.328 CC examples/bdev/hello_world/hello_bdev.o 00:03:20.586 CC examples/bdev/bdevperf/bdevperf.o 00:03:20.586 LINK nvme_compliance 00:03:20.586 CC examples/nvme/arbitration/arbitration.o 00:03:20.586 CXX test/cpp_headers/fd.o 00:03:20.586 CC test/nvme/fdp/fdp.o 00:03:20.586 CC test/nvme/cuse/cuse.o 00:03:20.586 CXX test/cpp_headers/file.o 00:03:20.586 LINK reconnect 00:03:20.586 LINK hello_bdev 00:03:20.586 CXX test/cpp_headers/fsdev.o 00:03:20.844 CXX test/cpp_headers/fsdev_module.o 00:03:20.844 LINK arbitration 00:03:20.844 CXX test/cpp_headers/ftl.o 00:03:20.844 CXX test/cpp_headers/fuse_dispatcher.o 00:03:20.844 LINK fdp 00:03:20.844 CC examples/nvme/hotplug/hotplug.o 00:03:20.844 CXX test/cpp_headers/gpt_spec.o 00:03:21.103 LINK nvme_manage 00:03:21.103 CXX test/cpp_headers/hexlify.o 00:03:21.103 CXX 
test/cpp_headers/histogram_data.o 00:03:21.103 CXX test/cpp_headers/idxd.o 00:03:21.103 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:21.103 CXX test/cpp_headers/idxd_spec.o 00:03:21.103 CXX test/cpp_headers/init.o 00:03:21.103 CC examples/nvme/abort/abort.o 00:03:21.103 LINK hotplug 00:03:21.361 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:21.361 CXX test/cpp_headers/ioat.o 00:03:21.361 LINK cmb_copy 00:03:21.361 CXX test/cpp_headers/ioat_spec.o 00:03:21.361 CXX test/cpp_headers/iscsi_spec.o 00:03:21.361 CXX test/cpp_headers/json.o 00:03:21.361 LINK bdevperf 00:03:21.361 CXX test/cpp_headers/jsonrpc.o 00:03:21.361 LINK pmr_persistence 00:03:21.361 CXX test/cpp_headers/keyring.o 00:03:21.361 CXX test/cpp_headers/keyring_module.o 00:03:21.620 CXX test/cpp_headers/likely.o 00:03:21.620 CXX test/cpp_headers/log.o 00:03:21.620 CXX test/cpp_headers/lvol.o 00:03:21.620 CXX test/cpp_headers/md5.o 00:03:21.620 CXX test/cpp_headers/memory.o 00:03:21.620 CXX test/cpp_headers/mmio.o 00:03:21.620 LINK abort 00:03:21.620 CXX test/cpp_headers/nbd.o 00:03:21.620 CXX test/cpp_headers/net.o 00:03:21.620 CXX test/cpp_headers/notify.o 00:03:21.620 CXX test/cpp_headers/nvme.o 00:03:21.620 CXX test/cpp_headers/nvme_intel.o 00:03:21.879 CXX test/cpp_headers/nvme_ocssd.o 00:03:21.879 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:21.879 CXX test/cpp_headers/nvme_spec.o 00:03:21.879 CXX test/cpp_headers/nvme_zns.o 00:03:21.879 CXX test/cpp_headers/nvmf_cmd.o 00:03:21.879 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:21.879 CXX test/cpp_headers/nvmf.o 00:03:21.879 CXX test/cpp_headers/nvmf_spec.o 00:03:21.879 CXX test/cpp_headers/nvmf_transport.o 00:03:21.879 LINK cuse 00:03:21.879 CXX test/cpp_headers/opal.o 00:03:22.159 CC examples/nvmf/nvmf/nvmf.o 00:03:22.159 CXX test/cpp_headers/opal_spec.o 00:03:22.159 CXX test/cpp_headers/pci_ids.o 00:03:22.159 CXX test/cpp_headers/pipe.o 00:03:22.159 CXX test/cpp_headers/queue.o 00:03:22.159 CXX test/cpp_headers/reduce.o 00:03:22.159 CXX 
test/cpp_headers/rpc.o 00:03:22.159 CXX test/cpp_headers/scheduler.o 00:03:22.159 CXX test/cpp_headers/scsi.o 00:03:22.159 CXX test/cpp_headers/scsi_spec.o 00:03:22.159 CXX test/cpp_headers/sock.o 00:03:22.159 CXX test/cpp_headers/stdinc.o 00:03:22.159 CXX test/cpp_headers/string.o 00:03:22.159 CXX test/cpp_headers/thread.o 00:03:22.159 CXX test/cpp_headers/trace.o 00:03:22.418 CXX test/cpp_headers/trace_parser.o 00:03:22.418 LINK nvmf 00:03:22.418 CXX test/cpp_headers/tree.o 00:03:22.418 CXX test/cpp_headers/ublk.o 00:03:22.418 CXX test/cpp_headers/util.o 00:03:22.418 CXX test/cpp_headers/uuid.o 00:03:22.418 CXX test/cpp_headers/version.o 00:03:22.418 CXX test/cpp_headers/vfio_user_pci.o 00:03:22.418 CXX test/cpp_headers/vfio_user_spec.o 00:03:22.418 CXX test/cpp_headers/vhost.o 00:03:22.418 CXX test/cpp_headers/vmd.o 00:03:22.418 CXX test/cpp_headers/xor.o 00:03:22.418 CXX test/cpp_headers/zipf.o 00:03:24.963 LINK esnap 00:03:24.963 00:03:24.963 real 1m28.732s 00:03:24.963 user 8m8.659s 00:03:24.963 sys 1m37.877s 00:03:24.963 09:40:50 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:24.963 09:40:50 make -- common/autotest_common.sh@10 -- $ set +x 00:03:24.963 ************************************ 00:03:24.963 END TEST make 00:03:24.963 ************************************ 00:03:24.963 09:40:50 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:24.963 09:40:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:24.963 09:40:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:24.963 09:40:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.963 09:40:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:24.963 09:40:50 -- pm/common@44 -- $ pid=5472 00:03:24.963 09:40:50 -- pm/common@50 -- $ kill -TERM 5472 00:03:24.963 09:40:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.963 09:40:50 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:24.963 09:40:50 -- pm/common@44 -- $ pid=5474 00:03:24.963 09:40:50 -- pm/common@50 -- $ kill -TERM 5474 00:03:24.963 09:40:50 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:24.963 09:40:50 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:24.963 09:40:50 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:24.963 09:40:50 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:24.963 09:40:50 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:25.222 09:40:50 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:25.222 09:40:50 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:25.222 09:40:50 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.222 09:40:50 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.222 09:40:50 -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.222 09:40:50 -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.222 09:40:50 -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.222 09:40:50 -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.222 09:40:50 -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.222 09:40:50 -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.222 09:40:50 -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.222 09:40:50 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.222 09:40:50 -- scripts/common.sh@344 -- # case "$op" in 00:03:25.222 09:40:50 -- scripts/common.sh@345 -- # : 1 00:03:25.222 09:40:50 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.222 09:40:50 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:25.222 09:40:50 -- scripts/common.sh@365 -- # decimal 1 00:03:25.222 09:40:50 -- scripts/common.sh@353 -- # local d=1 00:03:25.222 09:40:50 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.222 09:40:50 -- scripts/common.sh@355 -- # echo 1 00:03:25.222 09:40:50 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.222 09:40:50 -- scripts/common.sh@366 -- # decimal 2 00:03:25.222 09:40:50 -- scripts/common.sh@353 -- # local d=2 00:03:25.222 09:40:50 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.222 09:40:50 -- scripts/common.sh@355 -- # echo 2 00:03:25.222 09:40:50 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.222 09:40:50 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.222 09:40:50 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.222 09:40:50 -- scripts/common.sh@368 -- # return 0 00:03:25.222 09:40:50 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.222 09:40:50 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:25.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.222 --rc genhtml_branch_coverage=1 00:03:25.222 --rc genhtml_function_coverage=1 00:03:25.223 --rc genhtml_legend=1 00:03:25.223 --rc geninfo_all_blocks=1 00:03:25.223 --rc geninfo_unexecuted_blocks=1 00:03:25.223 00:03:25.223 ' 00:03:25.223 09:40:50 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:25.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.223 --rc genhtml_branch_coverage=1 00:03:25.223 --rc genhtml_function_coverage=1 00:03:25.223 --rc genhtml_legend=1 00:03:25.223 --rc geninfo_all_blocks=1 00:03:25.223 --rc geninfo_unexecuted_blocks=1 00:03:25.223 00:03:25.223 ' 00:03:25.223 09:40:50 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:25.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.223 --rc genhtml_branch_coverage=1 00:03:25.223 --rc 
genhtml_function_coverage=1 00:03:25.223 --rc genhtml_legend=1 00:03:25.223 --rc geninfo_all_blocks=1 00:03:25.223 --rc geninfo_unexecuted_blocks=1 00:03:25.223 00:03:25.223 ' 00:03:25.223 09:40:50 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:25.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.223 --rc genhtml_branch_coverage=1 00:03:25.223 --rc genhtml_function_coverage=1 00:03:25.223 --rc genhtml_legend=1 00:03:25.223 --rc geninfo_all_blocks=1 00:03:25.223 --rc geninfo_unexecuted_blocks=1 00:03:25.223 00:03:25.223 ' 00:03:25.223 09:40:50 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:25.223 09:40:50 -- nvmf/common.sh@7 -- # uname -s 00:03:25.223 09:40:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.223 09:40:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.223 09:40:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.223 09:40:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.223 09:40:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.223 09:40:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.223 09:40:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.223 09:40:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.223 09:40:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.223 09:40:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.223 09:40:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:884f5f63-3933-4296-a08b-b3110049e843 00:03:25.223 09:40:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=884f5f63-3933-4296-a08b-b3110049e843 00:03:25.223 09:40:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.223 09:40:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.223 09:40:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:25.223 09:40:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:25.223 09:40:50 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:25.223 09:40:50 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:25.223 09:40:50 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.223 09:40:50 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.223 09:40:50 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.223 09:40:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.223 09:40:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.223 09:40:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.223 09:40:50 -- paths/export.sh@5 -- # export PATH 00:03:25.223 09:40:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.223 09:40:50 -- nvmf/common.sh@51 -- # : 0 00:03:25.223 09:40:50 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:25.223 09:40:50 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:25.223 09:40:50 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:25.223 09:40:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.223 09:40:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.223 09:40:50 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:25.223 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:25.223 09:40:50 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:25.223 09:40:50 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:25.223 09:40:50 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:25.223 09:40:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.223 09:40:50 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.223 09:40:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.223 09:40:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.223 09:40:50 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.223 09:40:50 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.223 09:40:50 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.223 09:40:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.223 09:40:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.223 09:40:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.223 09:40:50 -- spdk/autotest.sh@48 -- # udevadm_pid=54493 00:03:25.223 09:40:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.223 09:40:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:25.223 09:40:50 -- pm/common@17 -- # local monitor 00:03:25.223 09:40:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.223 09:40:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.223 09:40:50 -- pm/common@25 -- # sleep 1 00:03:25.223 09:40:50 -- pm/common@21 -- # date +%s 00:03:25.223 09:40:50 -- 
pm/common@21 -- # date +%s 00:03:25.223 09:40:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733478050 00:03:25.223 09:40:50 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733478050 00:03:25.484 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733478050_collect-cpu-load.pm.log 00:03:25.484 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733478050_collect-vmstat.pm.log 00:03:26.444 09:40:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:26.444 09:40:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:26.444 09:40:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:26.444 09:40:51 -- common/autotest_common.sh@10 -- # set +x 00:03:26.444 09:40:51 -- spdk/autotest.sh@59 -- # create_test_list 00:03:26.444 09:40:51 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:26.444 09:40:51 -- common/autotest_common.sh@10 -- # set +x 00:03:26.444 09:40:51 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:26.444 09:40:51 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:26.444 09:40:51 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:26.444 09:40:51 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:26.444 09:40:51 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:26.444 09:40:51 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:26.444 09:40:51 -- common/autotest_common.sh@1457 -- # uname 00:03:26.444 09:40:51 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:26.444 09:40:51 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:26.444 09:40:51 -- common/autotest_common.sh@1477 -- 
# uname 00:03:26.444 09:40:51 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:26.444 09:40:51 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:26.444 09:40:51 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:26.444 lcov: LCOV version 1.15 00:03:26.444 09:40:51 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:41.362 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:41.362 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:56.262 09:41:20 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:56.262 09:41:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.262 09:41:20 -- common/autotest_common.sh@10 -- # set +x 00:03:56.262 09:41:20 -- spdk/autotest.sh@78 -- # rm -f 00:03:56.262 09:41:20 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:56.262 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.262 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:56.262 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:56.262 09:41:20 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:56.262 09:41:20 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:56.262 09:41:20 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:56.262 09:41:20 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:56.262 
09:41:20 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:56.262 09:41:20 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:56.262 09:41:20 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:56.262 09:41:20 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:56.262 09:41:20 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:56.262 09:41:20 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:56.262 09:41:20 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:56.262 09:41:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.262 09:41:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:56.263 09:41:20 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:56.263 09:41:20 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:56.263 09:41:20 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:56.263 09:41:20 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:56.263 09:41:20 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:56.263 09:41:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:56.263 09:41:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:56.263 09:41:20 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:56.263 09:41:20 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:03:56.263 09:41:20 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:56.263 09:41:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:56.263 09:41:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:56.263 09:41:20 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:56.263 09:41:21 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:03:56.263 09:41:21 -- 
common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:56.263 09:41:21 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:56.263 09:41:21 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:56.263 09:41:21 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:56.263 09:41:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.263 09:41:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.263 09:41:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:56.263 09:41:21 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:56.263 09:41:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:56.263 No valid GPT data, bailing 00:03:56.263 09:41:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:56.263 09:41:21 -- scripts/common.sh@394 -- # pt= 00:03:56.263 09:41:21 -- scripts/common.sh@395 -- # return 1 00:03:56.263 09:41:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:56.263 1+0 records in 00:03:56.263 1+0 records out 00:03:56.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00361639 s, 290 MB/s 00:03:56.263 09:41:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.263 09:41:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.263 09:41:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:56.263 09:41:21 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:56.263 09:41:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:56.263 No valid GPT data, bailing 00:03:56.263 09:41:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:56.263 09:41:21 -- scripts/common.sh@394 -- # pt= 00:03:56.263 09:41:21 -- scripts/common.sh@395 -- # return 1 00:03:56.263 09:41:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:56.263 1+0 records in 00:03:56.263 1+0 records 
out 00:03:56.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00790286 s, 133 MB/s 00:03:56.263 09:41:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.263 09:41:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.263 09:41:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:56.263 09:41:21 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:56.263 09:41:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:56.263 No valid GPT data, bailing 00:03:56.263 09:41:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:56.263 09:41:21 -- scripts/common.sh@394 -- # pt= 00:03:56.263 09:41:21 -- scripts/common.sh@395 -- # return 1 00:03:56.263 09:41:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:56.263 1+0 records in 00:03:56.263 1+0 records out 00:03:56.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0068725 s, 153 MB/s 00:03:56.263 09:41:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.263 09:41:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.263 09:41:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:56.263 09:41:21 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:56.263 09:41:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:56.263 No valid GPT data, bailing 00:03:56.263 09:41:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:56.263 09:41:21 -- scripts/common.sh@394 -- # pt= 00:03:56.263 09:41:21 -- scripts/common.sh@395 -- # return 1 00:03:56.263 09:41:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:56.263 1+0 records in 00:03:56.263 1+0 records out 00:03:56.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0063402 s, 165 MB/s 00:03:56.263 09:41:21 -- spdk/autotest.sh@105 -- # sync 00:03:56.263 09:41:21 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd 
reap_spdk_processes 00:03:56.263 09:41:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:56.263 09:41:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:58.809 09:41:24 -- spdk/autotest.sh@111 -- # uname -s 00:03:58.809 09:41:24 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:58.809 09:41:24 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:58.809 09:41:24 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:59.745 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.745 Hugepages 00:03:59.745 node hugesize free / total 00:03:59.745 node0 1048576kB 0 / 0 00:03:59.745 node0 2048kB 0 / 0 00:03:59.745 00:03:59.745 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:59.745 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:59.745 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:00.012 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:00.012 09:41:25 -- spdk/autotest.sh@117 -- # uname -s 00:04:00.012 09:41:25 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:00.012 09:41:25 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:00.012 09:41:25 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.950 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.950 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.950 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.950 09:41:26 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:01.886 09:41:27 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:01.886 09:41:27 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:01.886 09:41:27 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:01.886 09:41:27 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 
00:04:01.886 09:41:27 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:01.886 09:41:27 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:01.886 09:41:27 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.886 09:41:27 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:01.886 09:41:27 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:02.145 09:41:27 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:02.145 09:41:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:02.145 09:41:27 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.404 Waiting for block devices as requested 00:04:02.663 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.663 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.663 09:41:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:02.663 09:41:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:02.663 09:41:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:02.663 09:41:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.663 09:41:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.663 09:41:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:02.663 09:41:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.663 09:41:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:02.663 09:41:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:02.663 
09:41:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:02.663 09:41:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:02.663 09:41:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:02.663 09:41:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:02.923 09:41:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:02.923 09:41:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:02.923 09:41:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:02.923 09:41:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:02.923 09:41:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:02.923 09:41:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:02.923 09:41:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:02.923 09:41:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:02.923 09:41:27 -- common/autotest_common.sh@1543 -- # continue 00:04:02.923 09:41:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:02.923 09:41:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:02.923 09:41:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.923 09:41:27 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:02.923 09:41:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.923 09:41:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:02.923 09:41:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.923 09:41:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:02.923 09:41:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:02.923 09:41:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:02.923 09:41:27 -- 
common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:02.923 09:41:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:02.923 09:41:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:02.923 09:41:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:02.923 09:41:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:02.923 09:41:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:02.923 09:41:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:02.923 09:41:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:02.923 09:41:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:02.923 09:41:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:02.923 09:41:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:02.923 09:41:27 -- common/autotest_common.sh@1543 -- # continue 00:04:02.923 09:41:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:02.923 09:41:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.923 09:41:27 -- common/autotest_common.sh@10 -- # set +x 00:04:02.923 09:41:28 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:02.923 09:41:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.923 09:41:28 -- common/autotest_common.sh@10 -- # set +x 00:04:02.923 09:41:28 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.863 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.863 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.863 09:41:29 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:03.863 09:41:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:03.863 09:41:29 -- common/autotest_common.sh@10 -- # set +x 00:04:03.863 09:41:29 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:03.863 09:41:29 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:03.863 09:41:29 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.863 09:41:29 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:03.863 09:41:29 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:03.863 09:41:29 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:03.863 09:41:29 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:03.863 09:41:29 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:03.863 09:41:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:03.863 09:41:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:03.863 09:41:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.863 09:41:29 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.863 09:41:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:04.122 09:41:29 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:04.123 09:41:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:04.123 09:41:29 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:04.123 09:41:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:04.123 09:41:29 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:04.123 09:41:29 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.123 09:41:29 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:04.123 09:41:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:04.123 09:41:29 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:04.123 09:41:29 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.123 09:41:29 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:04.123 09:41:29 -- 
common/autotest_common.sh@1572 -- # return 0 00:04:04.123 09:41:29 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:04.123 09:41:29 -- common/autotest_common.sh@1580 -- # return 0 00:04:04.123 09:41:29 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:04.123 09:41:29 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:04.123 09:41:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:04.123 09:41:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:04.123 09:41:29 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:04.123 09:41:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.123 09:41:29 -- common/autotest_common.sh@10 -- # set +x 00:04:04.123 09:41:29 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:04.123 09:41:29 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.123 09:41:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.123 09:41:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.123 09:41:29 -- common/autotest_common.sh@10 -- # set +x 00:04:04.123 ************************************ 00:04:04.123 START TEST env 00:04:04.123 ************************************ 00:04:04.123 09:41:29 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.123 * Looking for test storage... 
00:04:04.123 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:04.123 09:41:29 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:04.123 09:41:29 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:04.123 09:41:29 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:04.123 09:41:29 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:04.123 09:41:29 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:04.123 09:41:29 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:04.123 09:41:29 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:04.123 09:41:29 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:04.123 09:41:29 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:04.123 09:41:29 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:04.123 09:41:29 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:04.123 09:41:29 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:04.123 09:41:29 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:04.397 09:41:29 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:04.397 09:41:29 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:04.397 09:41:29 env -- scripts/common.sh@344 -- # case "$op" in 00:04:04.397 09:41:29 env -- scripts/common.sh@345 -- # : 1 00:04:04.397 09:41:29 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:04.397 09:41:29 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:04.397 09:41:29 env -- scripts/common.sh@365 -- # decimal 1 00:04:04.397 09:41:29 env -- scripts/common.sh@353 -- # local d=1 00:04:04.397 09:41:29 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:04.397 09:41:29 env -- scripts/common.sh@355 -- # echo 1 00:04:04.397 09:41:29 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:04.397 09:41:29 env -- scripts/common.sh@366 -- # decimal 2 00:04:04.397 09:41:29 env -- scripts/common.sh@353 -- # local d=2 00:04:04.397 09:41:29 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:04.397 09:41:29 env -- scripts/common.sh@355 -- # echo 2 00:04:04.397 09:41:29 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:04.397 09:41:29 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:04.397 09:41:29 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:04.397 09:41:29 env -- scripts/common.sh@368 -- # return 0 00:04:04.397 09:41:29 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:04.397 09:41:29 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:04.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.397 --rc genhtml_branch_coverage=1 00:04:04.397 --rc genhtml_function_coverage=1 00:04:04.397 --rc genhtml_legend=1 00:04:04.397 --rc geninfo_all_blocks=1 00:04:04.397 --rc geninfo_unexecuted_blocks=1 00:04:04.397 00:04:04.398 ' 00:04:04.398 09:41:29 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:04.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.398 --rc genhtml_branch_coverage=1 00:04:04.398 --rc genhtml_function_coverage=1 00:04:04.398 --rc genhtml_legend=1 00:04:04.398 --rc geninfo_all_blocks=1 00:04:04.398 --rc geninfo_unexecuted_blocks=1 00:04:04.398 00:04:04.398 ' 00:04:04.398 09:41:29 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:04.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:04.398 --rc genhtml_branch_coverage=1 00:04:04.398 --rc genhtml_function_coverage=1 00:04:04.398 --rc genhtml_legend=1 00:04:04.398 --rc geninfo_all_blocks=1 00:04:04.398 --rc geninfo_unexecuted_blocks=1 00:04:04.398 00:04:04.398 ' 00:04:04.398 09:41:29 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:04.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:04.398 --rc genhtml_branch_coverage=1 00:04:04.398 --rc genhtml_function_coverage=1 00:04:04.398 --rc genhtml_legend=1 00:04:04.398 --rc geninfo_all_blocks=1 00:04:04.398 --rc geninfo_unexecuted_blocks=1 00:04:04.398 00:04:04.398 ' 00:04:04.398 09:41:29 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.398 09:41:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.398 09:41:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.398 09:41:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.398 ************************************ 00:04:04.398 START TEST env_memory 00:04:04.398 ************************************ 00:04:04.398 09:41:29 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.398 00:04:04.398 00:04:04.398 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.398 http://cunit.sourceforge.net/ 00:04:04.398 00:04:04.398 00:04:04.398 Suite: memory 00:04:04.398 Test: alloc and free memory map ...[2024-12-06 09:41:29.493815] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:04.398 passed 00:04:04.398 Test: mem map translation ...[2024-12-06 09:41:29.567590] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:04.398 [2024-12-06 09:41:29.567669] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:04.398 [2024-12-06 09:41:29.567747] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:04.398 [2024-12-06 09:41:29.567786] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:04.398 passed 00:04:04.398 Test: mem map registration ...[2024-12-06 09:41:29.633580] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:04.398 [2024-12-06 09:41:29.633650] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:04.672 passed 00:04:04.672 Test: mem map adjacent registrations ...passed 00:04:04.672 00:04:04.672 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.672 suites 1 1 n/a 0 0 00:04:04.672 tests 4 4 4 0 0 00:04:04.672 asserts 152 152 152 0 n/a 00:04:04.672 00:04:04.672 Elapsed time = 0.277 seconds 00:04:04.672 00:04:04.672 real 0m0.317s 00:04:04.672 user 0m0.282s 00:04:04.672 sys 0m0.027s 00:04:04.672 09:41:29 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.672 09:41:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:04.672 ************************************ 00:04:04.672 END TEST env_memory 00:04:04.672 ************************************ 00:04:04.672 09:41:29 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.672 09:41:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.672 09:41:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.672 09:41:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.673 
************************************ 00:04:04.673 START TEST env_vtophys 00:04:04.673 ************************************ 00:04:04.673 09:41:29 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.673 EAL: lib.eal log level changed from notice to debug 00:04:04.673 EAL: Detected lcore 0 as core 0 on socket 0 00:04:04.673 EAL: Detected lcore 1 as core 0 on socket 0 00:04:04.673 EAL: Detected lcore 2 as core 0 on socket 0 00:04:04.673 EAL: Detected lcore 3 as core 0 on socket 0 00:04:04.673 EAL: Detected lcore 4 as core 0 on socket 0 00:04:04.673 EAL: Detected lcore 5 as core 0 on socket 0 00:04:04.673 EAL: Detected lcore 6 as core 0 on socket 0 00:04:04.673 EAL: Detected lcore 7 as core 0 on socket 0 00:04:04.673 EAL: Detected lcore 8 as core 0 on socket 0 00:04:04.673 EAL: Detected lcore 9 as core 0 on socket 0 00:04:04.673 EAL: Maximum logical cores by configuration: 128 00:04:04.673 EAL: Detected CPU lcores: 10 00:04:04.673 EAL: Detected NUMA nodes: 1 00:04:04.673 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:04.673 EAL: Detected shared linkage of DPDK 00:04:04.673 EAL: No shared files mode enabled, IPC will be disabled 00:04:04.673 EAL: Selected IOVA mode 'PA' 00:04:04.673 EAL: Probing VFIO support... 00:04:04.673 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.673 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:04.673 EAL: Ask a virtual area of 0x2e000 bytes 00:04:04.673 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:04.673 EAL: Setting up physically contiguous memory... 
00:04:04.673 EAL: Setting maximum number of open files to 524288 00:04:04.673 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:04.673 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:04.673 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.673 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:04.673 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.673 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.673 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:04.673 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:04.673 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.673 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:04.673 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.673 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.673 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:04.673 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:04.673 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.673 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:04.673 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.673 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.673 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:04.673 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:04.673 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.673 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:04.673 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.673 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.673 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:04.673 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:04.673 EAL: Hugepages will be freed exactly as allocated. 
00:04:04.673 EAL: No shared files mode enabled, IPC is disabled 00:04:04.673 EAL: No shared files mode enabled, IPC is disabled 00:04:04.933 EAL: TSC frequency is ~2290000 KHz 00:04:04.933 EAL: Main lcore 0 is ready (tid=7fa85dcbaa40;cpuset=[0]) 00:04:04.933 EAL: Trying to obtain current memory policy. 00:04:04.933 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.933 EAL: Restoring previous memory policy: 0 00:04:04.933 EAL: request: mp_malloc_sync 00:04:04.933 EAL: No shared files mode enabled, IPC is disabled 00:04:04.933 EAL: Heap on socket 0 was expanded by 2MB 00:04:04.933 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.933 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:04.933 EAL: Mem event callback 'spdk:(nil)' registered 00:04:04.933 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:04.933 00:04:04.933 00:04:04.933 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.933 http://cunit.sourceforge.net/ 00:04:04.933 00:04:04.933 00:04:04.933 Suite: components_suite 00:04:05.193 Test: vtophys_malloc_test ...passed 00:04:05.193 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:05.193 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.193 EAL: Restoring previous memory policy: 4 00:04:05.193 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.193 EAL: request: mp_malloc_sync 00:04:05.193 EAL: No shared files mode enabled, IPC is disabled 00:04:05.194 EAL: Heap on socket 0 was expanded by 4MB 00:04:05.194 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.194 EAL: request: mp_malloc_sync 00:04:05.194 EAL: No shared files mode enabled, IPC is disabled 00:04:05.194 EAL: Heap on socket 0 was shrunk by 4MB 00:04:05.194 EAL: Trying to obtain current memory policy. 
00:04:05.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.194 EAL: Restoring previous memory policy: 4 00:04:05.194 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.194 EAL: request: mp_malloc_sync 00:04:05.194 EAL: No shared files mode enabled, IPC is disabled 00:04:05.194 EAL: Heap on socket 0 was expanded by 6MB 00:04:05.194 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.194 EAL: request: mp_malloc_sync 00:04:05.194 EAL: No shared files mode enabled, IPC is disabled 00:04:05.194 EAL: Heap on socket 0 was shrunk by 6MB 00:04:05.194 EAL: Trying to obtain current memory policy. 00:04:05.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.194 EAL: Restoring previous memory policy: 4 00:04:05.194 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.194 EAL: request: mp_malloc_sync 00:04:05.194 EAL: No shared files mode enabled, IPC is disabled 00:04:05.194 EAL: Heap on socket 0 was expanded by 10MB 00:04:05.194 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.194 EAL: request: mp_malloc_sync 00:04:05.194 EAL: No shared files mode enabled, IPC is disabled 00:04:05.194 EAL: Heap on socket 0 was shrunk by 10MB 00:04:05.194 EAL: Trying to obtain current memory policy. 00:04:05.194 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.194 EAL: Restoring previous memory policy: 4 00:04:05.194 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.194 EAL: request: mp_malloc_sync 00:04:05.194 EAL: No shared files mode enabled, IPC is disabled 00:04:05.194 EAL: Heap on socket 0 was expanded by 18MB 00:04:05.453 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.453 EAL: request: mp_malloc_sync 00:04:05.453 EAL: No shared files mode enabled, IPC is disabled 00:04:05.453 EAL: Heap on socket 0 was shrunk by 18MB 00:04:05.453 EAL: Trying to obtain current memory policy. 
00:04:05.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.453 EAL: Restoring previous memory policy: 4 00:04:05.453 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.453 EAL: request: mp_malloc_sync 00:04:05.453 EAL: No shared files mode enabled, IPC is disabled 00:04:05.453 EAL: Heap on socket 0 was expanded by 34MB 00:04:05.453 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.453 EAL: request: mp_malloc_sync 00:04:05.453 EAL: No shared files mode enabled, IPC is disabled 00:04:05.453 EAL: Heap on socket 0 was shrunk by 34MB 00:04:05.453 EAL: Trying to obtain current memory policy. 00:04:05.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.453 EAL: Restoring previous memory policy: 4 00:04:05.453 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.453 EAL: request: mp_malloc_sync 00:04:05.453 EAL: No shared files mode enabled, IPC is disabled 00:04:05.453 EAL: Heap on socket 0 was expanded by 66MB 00:04:05.713 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.713 EAL: request: mp_malloc_sync 00:04:05.713 EAL: No shared files mode enabled, IPC is disabled 00:04:05.713 EAL: Heap on socket 0 was shrunk by 66MB 00:04:05.713 EAL: Trying to obtain current memory policy. 00:04:05.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.713 EAL: Restoring previous memory policy: 4 00:04:05.713 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.713 EAL: request: mp_malloc_sync 00:04:05.713 EAL: No shared files mode enabled, IPC is disabled 00:04:05.713 EAL: Heap on socket 0 was expanded by 130MB 00:04:05.973 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.973 EAL: request: mp_malloc_sync 00:04:05.973 EAL: No shared files mode enabled, IPC is disabled 00:04:05.973 EAL: Heap on socket 0 was shrunk by 130MB 00:04:06.232 EAL: Trying to obtain current memory policy. 
00:04:06.232 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.232 EAL: Restoring previous memory policy: 4 00:04:06.232 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.232 EAL: request: mp_malloc_sync 00:04:06.232 EAL: No shared files mode enabled, IPC is disabled 00:04:06.232 EAL: Heap on socket 0 was expanded by 258MB 00:04:06.801 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.801 EAL: request: mp_malloc_sync 00:04:06.801 EAL: No shared files mode enabled, IPC is disabled 00:04:06.801 EAL: Heap on socket 0 was shrunk by 258MB 00:04:07.060 EAL: Trying to obtain current memory policy. 00:04:07.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.319 EAL: Restoring previous memory policy: 4 00:04:07.319 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.319 EAL: request: mp_malloc_sync 00:04:07.319 EAL: No shared files mode enabled, IPC is disabled 00:04:07.319 EAL: Heap on socket 0 was expanded by 514MB 00:04:08.258 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.258 EAL: request: mp_malloc_sync 00:04:08.258 EAL: No shared files mode enabled, IPC is disabled 00:04:08.258 EAL: Heap on socket 0 was shrunk by 514MB 00:04:09.199 EAL: Trying to obtain current memory policy. 
00:04:09.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.199 EAL: Restoring previous memory policy: 4 00:04:09.199 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.199 EAL: request: mp_malloc_sync 00:04:09.199 EAL: No shared files mode enabled, IPC is disabled 00:04:09.199 EAL: Heap on socket 0 was expanded by 1026MB 00:04:11.104 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.104 EAL: request: mp_malloc_sync 00:04:11.104 EAL: No shared files mode enabled, IPC is disabled 00:04:11.104 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:13.006 passed 00:04:13.006 00:04:13.006 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.006 suites 1 1 n/a 0 0 00:04:13.006 tests 2 2 2 0 0 00:04:13.006 asserts 5754 5754 5754 0 n/a 00:04:13.006 00:04:13.006 Elapsed time = 7.926 seconds 00:04:13.006 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.006 EAL: request: mp_malloc_sync 00:04:13.006 EAL: No shared files mode enabled, IPC is disabled 00:04:13.006 EAL: Heap on socket 0 was shrunk by 2MB 00:04:13.006 EAL: No shared files mode enabled, IPC is disabled 00:04:13.006 EAL: No shared files mode enabled, IPC is disabled 00:04:13.006 EAL: No shared files mode enabled, IPC is disabled 00:04:13.006 00:04:13.006 real 0m8.249s 00:04:13.006 user 0m7.285s 00:04:13.006 sys 0m0.812s 00:04:13.006 09:41:38 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.006 09:41:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:13.006 ************************************ 00:04:13.006 END TEST env_vtophys 00:04:13.006 ************************************ 00:04:13.006 09:41:38 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.006 09:41:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.006 09:41:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.006 09:41:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.006 
************************************ 00:04:13.006 START TEST env_pci 00:04:13.006 ************************************ 00:04:13.006 09:41:38 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.006 00:04:13.006 00:04:13.006 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.006 http://cunit.sourceforge.net/ 00:04:13.006 00:04:13.006 00:04:13.006 Suite: pci 00:04:13.006 Test: pci_hook ...[2024-12-06 09:41:38.169493] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56783 has claimed it 00:04:13.006 passed 00:04:13.006 00:04:13.006 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.006 suites 1 1 n/a 0 0 00:04:13.006 tests 1 1 1 0 0 00:04:13.006 asserts 25 25 25 0 n/a 00:04:13.006 00:04:13.006 Elapsed time = 0.006 seconds 00:04:13.006 EAL: Cannot find device (10000:00:01.0) 00:04:13.006 EAL: Failed to attach device on primary process 00:04:13.006 00:04:13.006 real 0m0.095s 00:04:13.006 user 0m0.045s 00:04:13.006 sys 0m0.049s 00:04:13.006 09:41:38 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.006 09:41:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:13.006 ************************************ 00:04:13.006 END TEST env_pci 00:04:13.006 ************************************ 00:04:13.006 09:41:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:13.006 09:41:38 env -- env/env.sh@15 -- # uname 00:04:13.264 09:41:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:13.264 09:41:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:13.264 09:41:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.264 09:41:38 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:13.264 09:41:38 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.264 09:41:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.264 ************************************ 00:04:13.264 START TEST env_dpdk_post_init 00:04:13.264 ************************************ 00:04:13.264 09:41:38 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.264 EAL: Detected CPU lcores: 10 00:04:13.264 EAL: Detected NUMA nodes: 1 00:04:13.264 EAL: Detected shared linkage of DPDK 00:04:13.264 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.264 EAL: Selected IOVA mode 'PA' 00:04:13.264 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.555 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:13.555 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:13.555 Starting DPDK initialization... 00:04:13.555 Starting SPDK post initialization... 00:04:13.555 SPDK NVMe probe 00:04:13.555 Attaching to 0000:00:10.0 00:04:13.555 Attaching to 0000:00:11.0 00:04:13.555 Attached to 0000:00:10.0 00:04:13.555 Attached to 0000:00:11.0 00:04:13.555 Cleaning up... 
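The probe output above follows a fixed shape ("Attaching to <bdf>" then "Attached to <bdf>"), so the set of devices that actually attached can be pulled out of a captured log. An illustrative sketch (`attached_devices` is a made-up name; the line format is copied from this output):

```python
def attached_devices(log_text: str) -> list:
    """Return the PCI addresses from 'Attached to <bdf>' lines,
    in the order they appear."""
    return [
        line.split()[-1]
        for line in log_text.splitlines()
        if line.strip().startswith("Attached to ")
    ]

sample = (
    "Attaching to 0000:00:10.0\n"
    "Attaching to 0000:00:11.0\n"
    "Attached to 0000:00:10.0\n"
    "Attached to 0000:00:11.0\n"
)
print(attached_devices(sample))  # ['0000:00:10.0', '0000:00:11.0']
```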
00:04:13.555 00:04:13.555 real 0m0.295s 00:04:13.555 user 0m0.101s 00:04:13.555 sys 0m0.095s 00:04:13.555 09:41:38 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.555 09:41:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:13.555 ************************************ 00:04:13.556 END TEST env_dpdk_post_init 00:04:13.556 ************************************ 00:04:13.556 09:41:38 env -- env/env.sh@26 -- # uname 00:04:13.556 09:41:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:13.556 09:41:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.556 09:41:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.556 09:41:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.556 09:41:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.556 ************************************ 00:04:13.556 START TEST env_mem_callbacks 00:04:13.556 ************************************ 00:04:13.556 09:41:38 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.556 EAL: Detected CPU lcores: 10 00:04:13.556 EAL: Detected NUMA nodes: 1 00:04:13.556 EAL: Detected shared linkage of DPDK 00:04:13.556 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.556 EAL: Selected IOVA mode 'PA' 00:04:13.816 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.816 00:04:13.816 00:04:13.816 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.816 http://cunit.sourceforge.net/ 00:04:13.816 00:04:13.816 00:04:13.816 Suite: memory 00:04:13.816 Test: test ... 
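Each test above reports shell-style "real/user/sys" timings such as "real 0m9.806s". Converting those fields to plain seconds makes them easy to compare across runs; a small illustrative helper (assumed input format: minutes then seconds, as printed in this log; `time_field_seconds` is not an SPDK utility):

```python
import re

def time_field_seconds(field: str) -> float:
    """Convert a time(1)-style field like '0m9.806s' to seconds."""
    match = re.fullmatch(r"(\d+)m([\d.]+)s", field)
    if match is None:
        raise ValueError(f"unrecognized time field: {field!r}")
    minutes, seconds = match.groups()
    return int(minutes) * 60 + float(seconds)

print(time_field_seconds("0m9.806s"))  # 9.806
```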
00:04:13.816 register 0x200000200000 2097152 00:04:13.816 malloc 3145728 00:04:13.816 register 0x200000400000 4194304 00:04:13.816 buf 0x2000004fffc0 len 3145728 PASSED 00:04:13.816 malloc 64 00:04:13.816 buf 0x2000004ffec0 len 64 PASSED 00:04:13.816 malloc 4194304 00:04:13.816 register 0x200000800000 6291456 00:04:13.816 buf 0x2000009fffc0 len 4194304 PASSED 00:04:13.816 free 0x2000004fffc0 3145728 00:04:13.816 free 0x2000004ffec0 64 00:04:13.816 unregister 0x200000400000 4194304 PASSED 00:04:13.816 free 0x2000009fffc0 4194304 00:04:13.816 unregister 0x200000800000 6291456 PASSED 00:04:13.816 malloc 8388608 00:04:13.816 register 0x200000400000 10485760 00:04:13.816 buf 0x2000005fffc0 len 8388608 PASSED 00:04:13.816 free 0x2000005fffc0 8388608 00:04:13.816 unregister 0x200000400000 10485760 PASSED 00:04:13.816 passed 00:04:13.816 00:04:13.816 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.816 suites 1 1 n/a 0 0 00:04:13.816 tests 1 1 1 0 0 00:04:13.816 asserts 15 15 15 0 n/a 00:04:13.816 00:04:13.816 Elapsed time = 0.084 seconds 00:04:13.816 00:04:13.816 real 0m0.295s 00:04:13.816 user 0m0.115s 00:04:13.816 sys 0m0.077s 00:04:13.816 09:41:38 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.816 09:41:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:13.816 ************************************ 00:04:13.816 END TEST env_mem_callbacks 00:04:13.816 ************************************ 00:04:13.816 00:04:13.816 real 0m9.806s 00:04:13.816 user 0m8.051s 00:04:13.816 sys 0m1.404s 00:04:13.816 09:41:39 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.816 09:41:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.816 ************************************ 00:04:13.816 END TEST env 00:04:13.816 ************************************ 00:04:13.816 09:41:39 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.816 09:41:39 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.816 09:41:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.816 09:41:39 -- common/autotest_common.sh@10 -- # set +x 00:04:13.816 ************************************ 00:04:13.816 START TEST rpc 00:04:13.816 ************************************ 00:04:13.816 09:41:39 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:14.077 * Looking for test storage... 00:04:14.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:14.077 09:41:39 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.077 09:41:39 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.077 09:41:39 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.077 09:41:39 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.077 09:41:39 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.077 09:41:39 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.077 09:41:39 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.077 09:41:39 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.077 09:41:39 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:14.077 09:41:39 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.077 09:41:39 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.077 09:41:39 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:14.077 09:41:39 rpc -- scripts/common.sh@345 -- # : 1 00:04:14.077 09:41:39 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.077 09:41:39 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:14.077 09:41:39 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:14.077 09:41:39 rpc -- scripts/common.sh@353 -- # local d=1 00:04:14.077 09:41:39 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.077 09:41:39 rpc -- scripts/common.sh@355 -- # echo 1 00:04:14.077 09:41:39 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.077 09:41:39 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:14.077 09:41:39 rpc -- scripts/common.sh@353 -- # local d=2 00:04:14.077 09:41:39 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.077 09:41:39 rpc -- scripts/common.sh@355 -- # echo 2 00:04:14.077 09:41:39 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.077 09:41:39 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.077 09:41:39 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.077 09:41:39 rpc -- scripts/common.sh@368 -- # return 0 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:14.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.077 --rc genhtml_branch_coverage=1 00:04:14.077 --rc genhtml_function_coverage=1 00:04:14.077 --rc genhtml_legend=1 00:04:14.077 --rc geninfo_all_blocks=1 00:04:14.077 --rc geninfo_unexecuted_blocks=1 00:04:14.077 00:04:14.077 ' 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:14.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.077 --rc genhtml_branch_coverage=1 00:04:14.077 --rc genhtml_function_coverage=1 00:04:14.077 --rc genhtml_legend=1 00:04:14.077 --rc geninfo_all_blocks=1 00:04:14.077 --rc geninfo_unexecuted_blocks=1 00:04:14.077 00:04:14.077 ' 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:14.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:14.077 --rc genhtml_branch_coverage=1 00:04:14.077 --rc genhtml_function_coverage=1 00:04:14.077 --rc genhtml_legend=1 00:04:14.077 --rc geninfo_all_blocks=1 00:04:14.077 --rc geninfo_unexecuted_blocks=1 00:04:14.077 00:04:14.077 ' 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:14.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.077 --rc genhtml_branch_coverage=1 00:04:14.077 --rc genhtml_function_coverage=1 00:04:14.077 --rc genhtml_legend=1 00:04:14.077 --rc geninfo_all_blocks=1 00:04:14.077 --rc geninfo_unexecuted_blocks=1 00:04:14.077 00:04:14.077 ' 00:04:14.077 09:41:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56910 00:04:14.077 09:41:39 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:14.077 09:41:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.077 09:41:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56910 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@835 -- # '[' -z 56910 ']' 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.077 09:41:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.337 [2024-12-06 09:41:39.386335] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
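The cmp_versions trace above ("lt 1.15 2" from scripts/common.sh) appears to split dotted versions and compare components numerically, left to right, with missing components treated as zero. The same comparison can be sketched in a few lines of Python (illustrative only; `version_lt` is a made-up name, not the SPDK helper itself):

```python
def version_lt(v1: str, v2: str) -> bool:
    """True when dotted version v1 sorts strictly before v2,
    comparing numeric components left to right and treating
    missing components as zero (e.g. '1.15' < '2')."""
    a = [int(part) for part in v1.split(".")]
    b = [int(part) for part in v2.split(".")]
    width = max(len(a), len(b))
    a += [0] * (width - len(a))
    b += [0] * (width - len(b))
    return a < b

print(version_lt("1.15", "2"))  # True, matching the 'lt 1.15 2' check above
```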
00:04:14.337 [2024-12-06 09:41:39.386466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56910 ] 00:04:14.337 [2024-12-06 09:41:39.564511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.597 [2024-12-06 09:41:39.682580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:14.597 [2024-12-06 09:41:39.682649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56910' to capture a snapshot of events at runtime. 00:04:14.597 [2024-12-06 09:41:39.682660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:14.597 [2024-12-06 09:41:39.682671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:14.597 [2024-12-06 09:41:39.682678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56910 for offline analysis/debug. 
00:04:14.597 [2024-12-06 09:41:39.684043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.538 09:41:40 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.538 09:41:40 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:15.538 09:41:40 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.538 09:41:40 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.538 09:41:40 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:15.538 09:41:40 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:15.538 09:41:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.538 09:41:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.538 09:41:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.538 ************************************ 00:04:15.538 START TEST rpc_integrity 00:04:15.538 ************************************ 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:15.538 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.538 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.538 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.538 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.538 09:41:40 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.538 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:15.538 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.538 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.538 { 00:04:15.538 "name": "Malloc0", 00:04:15.538 "aliases": [ 00:04:15.538 "511a8521-dcd4-403c-995e-1893b2446e97" 00:04:15.538 ], 00:04:15.538 "product_name": "Malloc disk", 00:04:15.538 "block_size": 512, 00:04:15.538 "num_blocks": 16384, 00:04:15.538 "uuid": "511a8521-dcd4-403c-995e-1893b2446e97", 00:04:15.538 "assigned_rate_limits": { 00:04:15.538 "rw_ios_per_sec": 0, 00:04:15.538 "rw_mbytes_per_sec": 0, 00:04:15.538 "r_mbytes_per_sec": 0, 00:04:15.538 "w_mbytes_per_sec": 0 00:04:15.538 }, 00:04:15.538 "claimed": false, 00:04:15.538 "zoned": false, 00:04:15.538 "supported_io_types": { 00:04:15.538 "read": true, 00:04:15.538 "write": true, 00:04:15.538 "unmap": true, 00:04:15.538 "flush": true, 00:04:15.538 "reset": true, 00:04:15.538 "nvme_admin": false, 00:04:15.538 "nvme_io": false, 00:04:15.538 "nvme_io_md": false, 00:04:15.538 "write_zeroes": true, 00:04:15.538 "zcopy": true, 00:04:15.538 "get_zone_info": false, 00:04:15.538 "zone_management": false, 00:04:15.538 "zone_append": false, 00:04:15.538 "compare": false, 00:04:15.538 "compare_and_write": false, 00:04:15.538 "abort": true, 00:04:15.538 "seek_hole": false, 
00:04:15.538 "seek_data": false, 00:04:15.538 "copy": true, 00:04:15.538 "nvme_iov_md": false 00:04:15.538 }, 00:04:15.538 "memory_domains": [ 00:04:15.538 { 00:04:15.538 "dma_device_id": "system", 00:04:15.538 "dma_device_type": 1 00:04:15.538 }, 00:04:15.538 { 00:04:15.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.538 "dma_device_type": 2 00:04:15.538 } 00:04:15.538 ], 00:04:15.538 "driver_specific": {} 00:04:15.538 } 00:04:15.538 ]' 00:04:15.538 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.538 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.538 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.538 [2024-12-06 09:41:40.752657] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:15.538 [2024-12-06 09:41:40.752751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.538 [2024-12-06 09:41:40.752784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:15.538 [2024-12-06 09:41:40.752801] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.538 [2024-12-06 09:41:40.755201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.538 [2024-12-06 09:41:40.755252] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.538 Passthru0 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.538 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:15.538 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.538 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.538 { 00:04:15.538 "name": "Malloc0", 00:04:15.538 "aliases": [ 00:04:15.538 "511a8521-dcd4-403c-995e-1893b2446e97" 00:04:15.538 ], 00:04:15.538 "product_name": "Malloc disk", 00:04:15.538 "block_size": 512, 00:04:15.538 "num_blocks": 16384, 00:04:15.539 "uuid": "511a8521-dcd4-403c-995e-1893b2446e97", 00:04:15.539 "assigned_rate_limits": { 00:04:15.539 "rw_ios_per_sec": 0, 00:04:15.539 "rw_mbytes_per_sec": 0, 00:04:15.539 "r_mbytes_per_sec": 0, 00:04:15.539 "w_mbytes_per_sec": 0 00:04:15.539 }, 00:04:15.539 "claimed": true, 00:04:15.539 "claim_type": "exclusive_write", 00:04:15.539 "zoned": false, 00:04:15.539 "supported_io_types": { 00:04:15.539 "read": true, 00:04:15.539 "write": true, 00:04:15.539 "unmap": true, 00:04:15.539 "flush": true, 00:04:15.539 "reset": true, 00:04:15.539 "nvme_admin": false, 00:04:15.539 "nvme_io": false, 00:04:15.539 "nvme_io_md": false, 00:04:15.539 "write_zeroes": true, 00:04:15.539 "zcopy": true, 00:04:15.539 "get_zone_info": false, 00:04:15.539 "zone_management": false, 00:04:15.539 "zone_append": false, 00:04:15.539 "compare": false, 00:04:15.539 "compare_and_write": false, 00:04:15.539 "abort": true, 00:04:15.539 "seek_hole": false, 00:04:15.539 "seek_data": false, 00:04:15.539 "copy": true, 00:04:15.539 "nvme_iov_md": false 00:04:15.539 }, 00:04:15.539 "memory_domains": [ 00:04:15.539 { 00:04:15.539 "dma_device_id": "system", 00:04:15.539 "dma_device_type": 1 00:04:15.539 }, 00:04:15.539 { 00:04:15.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.539 "dma_device_type": 2 00:04:15.539 } 00:04:15.539 ], 00:04:15.539 "driver_specific": {} 00:04:15.539 }, 00:04:15.539 { 00:04:15.539 "name": "Passthru0", 00:04:15.539 "aliases": [ 00:04:15.539 "9fd4b500-1007-5830-a31a-0fc6181ecf7a" 00:04:15.539 ], 00:04:15.539 "product_name": "passthru", 00:04:15.539 
"block_size": 512, 00:04:15.539 "num_blocks": 16384, 00:04:15.539 "uuid": "9fd4b500-1007-5830-a31a-0fc6181ecf7a", 00:04:15.539 "assigned_rate_limits": { 00:04:15.539 "rw_ios_per_sec": 0, 00:04:15.539 "rw_mbytes_per_sec": 0, 00:04:15.539 "r_mbytes_per_sec": 0, 00:04:15.539 "w_mbytes_per_sec": 0 00:04:15.539 }, 00:04:15.539 "claimed": false, 00:04:15.539 "zoned": false, 00:04:15.539 "supported_io_types": { 00:04:15.539 "read": true, 00:04:15.539 "write": true, 00:04:15.539 "unmap": true, 00:04:15.539 "flush": true, 00:04:15.539 "reset": true, 00:04:15.539 "nvme_admin": false, 00:04:15.539 "nvme_io": false, 00:04:15.539 "nvme_io_md": false, 00:04:15.539 "write_zeroes": true, 00:04:15.539 "zcopy": true, 00:04:15.539 "get_zone_info": false, 00:04:15.539 "zone_management": false, 00:04:15.539 "zone_append": false, 00:04:15.539 "compare": false, 00:04:15.539 "compare_and_write": false, 00:04:15.539 "abort": true, 00:04:15.539 "seek_hole": false, 00:04:15.539 "seek_data": false, 00:04:15.539 "copy": true, 00:04:15.539 "nvme_iov_md": false 00:04:15.539 }, 00:04:15.539 "memory_domains": [ 00:04:15.539 { 00:04:15.539 "dma_device_id": "system", 00:04:15.539 "dma_device_type": 1 00:04:15.539 }, 00:04:15.539 { 00:04:15.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.539 "dma_device_type": 2 00:04:15.539 } 00:04:15.539 ], 00:04:15.539 "driver_specific": { 00:04:15.539 "passthru": { 00:04:15.539 "name": "Passthru0", 00:04:15.539 "base_bdev_name": "Malloc0" 00:04:15.539 } 00:04:15.539 } 00:04:15.539 } 00:04:15.539 ]' 00:04:15.539 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.799 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.799 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.799 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.799 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.799 09:41:40 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.799 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:15.799 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.799 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.800 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.800 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.800 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.800 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.800 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.800 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.800 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.800 ************************************ 00:04:15.800 END TEST rpc_integrity 00:04:15.800 ************************************ 00:04:15.800 09:41:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.800 00:04:15.800 real 0m0.339s 00:04:15.800 user 0m0.180s 00:04:15.800 sys 0m0.052s 00:04:15.800 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.800 09:41:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.800 09:41:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:15.800 09:41:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.800 09:41:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.800 09:41:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.800 ************************************ 00:04:15.800 START TEST rpc_plugins 00:04:15.800 ************************************ 00:04:15.800 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:15.800 09:41:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:15.800 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.800 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.800 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.800 09:41:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:15.800 09:41:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:15.800 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.800 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.800 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.800 09:41:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:15.800 { 00:04:15.800 "name": "Malloc1", 00:04:15.800 "aliases": [ 00:04:15.800 "7042ab4b-3a77-4419-ab0f-79bb7b1c6821" 00:04:15.800 ], 00:04:15.800 "product_name": "Malloc disk", 00:04:15.800 "block_size": 4096, 00:04:15.800 "num_blocks": 256, 00:04:15.800 "uuid": "7042ab4b-3a77-4419-ab0f-79bb7b1c6821", 00:04:15.800 "assigned_rate_limits": { 00:04:15.800 "rw_ios_per_sec": 0, 00:04:15.800 "rw_mbytes_per_sec": 0, 00:04:15.800 "r_mbytes_per_sec": 0, 00:04:15.800 "w_mbytes_per_sec": 0 00:04:15.800 }, 00:04:15.800 "claimed": false, 00:04:15.800 "zoned": false, 00:04:15.800 "supported_io_types": { 00:04:15.800 "read": true, 00:04:15.800 "write": true, 00:04:15.800 "unmap": true, 00:04:15.800 "flush": true, 00:04:15.800 "reset": true, 00:04:15.800 "nvme_admin": false, 00:04:15.800 "nvme_io": false, 00:04:15.800 "nvme_io_md": false, 00:04:15.800 "write_zeroes": true, 00:04:15.800 "zcopy": true, 00:04:15.800 "get_zone_info": false, 00:04:15.800 "zone_management": false, 00:04:15.800 "zone_append": false, 00:04:15.800 "compare": false, 00:04:15.800 "compare_and_write": false, 00:04:15.800 "abort": true, 00:04:15.800 "seek_hole": false, 00:04:15.800 "seek_data": false, 00:04:15.800 "copy": 
true, 00:04:15.800 "nvme_iov_md": false 00:04:15.800 }, 00:04:15.800 "memory_domains": [ 00:04:15.800 { 00:04:15.800 "dma_device_id": "system", 00:04:15.800 "dma_device_type": 1 00:04:15.800 }, 00:04:15.800 { 00:04:15.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.800 "dma_device_type": 2 00:04:15.800 } 00:04:15.800 ], 00:04:15.800 "driver_specific": {} 00:04:15.800 } 00:04:15.800 ]' 00:04:15.800 09:41:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:16.060 09:41:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:16.060 09:41:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:16.060 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.060 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.060 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.060 09:41:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:16.060 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.060 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.060 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.060 09:41:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:16.060 09:41:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:16.060 ************************************ 00:04:16.060 END TEST rpc_plugins 00:04:16.060 ************************************ 00:04:16.060 09:41:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:16.060 00:04:16.060 real 0m0.165s 00:04:16.060 user 0m0.089s 00:04:16.060 sys 0m0.028s 00:04:16.060 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.060 09:41:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.060 09:41:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:16.060 09:41:41 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.060 09:41:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.060 09:41:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.060 ************************************ 00:04:16.060 START TEST rpc_trace_cmd_test 00:04:16.060 ************************************ 00:04:16.060 09:41:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:16.060 09:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:16.060 09:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:16.060 09:41:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.060 09:41:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.060 09:41:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.060 09:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:16.060 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56910", 00:04:16.060 "tpoint_group_mask": "0x8", 00:04:16.060 "iscsi_conn": { 00:04:16.060 "mask": "0x2", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "scsi": { 00:04:16.060 "mask": "0x4", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "bdev": { 00:04:16.060 "mask": "0x8", 00:04:16.060 "tpoint_mask": "0xffffffffffffffff" 00:04:16.060 }, 00:04:16.060 "nvmf_rdma": { 00:04:16.060 "mask": "0x10", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "nvmf_tcp": { 00:04:16.060 "mask": "0x20", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "ftl": { 00:04:16.060 "mask": "0x40", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "blobfs": { 00:04:16.060 "mask": "0x80", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "dsa": { 00:04:16.060 "mask": "0x200", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "thread": { 00:04:16.060 "mask": "0x400", 00:04:16.060 
"tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "nvme_pcie": { 00:04:16.060 "mask": "0x800", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "iaa": { 00:04:16.060 "mask": "0x1000", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "nvme_tcp": { 00:04:16.060 "mask": "0x2000", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "bdev_nvme": { 00:04:16.060 "mask": "0x4000", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "sock": { 00:04:16.060 "mask": "0x8000", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "blob": { 00:04:16.060 "mask": "0x10000", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "bdev_raid": { 00:04:16.060 "mask": "0x20000", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 }, 00:04:16.060 "scheduler": { 00:04:16.060 "mask": "0x40000", 00:04:16.060 "tpoint_mask": "0x0" 00:04:16.060 } 00:04:16.060 }' 00:04:16.060 09:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:16.060 09:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:16.060 09:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:16.320 09:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:16.320 09:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:16.320 09:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:16.320 09:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:16.320 09:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:16.320 09:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:16.320 ************************************ 00:04:16.320 END TEST rpc_trace_cmd_test 00:04:16.320 ************************************ 00:04:16.320 09:41:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:16.320 00:04:16.320 real 0m0.267s 00:04:16.320 user 
0m0.219s 00:04:16.320 sys 0m0.038s 00:04:16.320 09:41:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.320 09:41:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.320 09:41:41 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:16.320 09:41:41 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:16.320 09:41:41 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:16.320 09:41:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.320 09:41:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.320 09:41:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.320 ************************************ 00:04:16.320 START TEST rpc_daemon_integrity 00:04:16.320 ************************************ 00:04:16.320 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:16.320 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.320 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.320 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.320 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.320 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.320 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.583 { 00:04:16.583 "name": "Malloc2", 00:04:16.583 "aliases": [ 00:04:16.583 "8193c7c7-089e-4fec-afe8-f4fe3f847c3d" 00:04:16.583 ], 00:04:16.583 "product_name": "Malloc disk", 00:04:16.583 "block_size": 512, 00:04:16.583 "num_blocks": 16384, 00:04:16.583 "uuid": "8193c7c7-089e-4fec-afe8-f4fe3f847c3d", 00:04:16.583 "assigned_rate_limits": { 00:04:16.583 "rw_ios_per_sec": 0, 00:04:16.583 "rw_mbytes_per_sec": 0, 00:04:16.583 "r_mbytes_per_sec": 0, 00:04:16.583 "w_mbytes_per_sec": 0 00:04:16.583 }, 00:04:16.583 "claimed": false, 00:04:16.583 "zoned": false, 00:04:16.583 "supported_io_types": { 00:04:16.583 "read": true, 00:04:16.583 "write": true, 00:04:16.583 "unmap": true, 00:04:16.583 "flush": true, 00:04:16.583 "reset": true, 00:04:16.583 "nvme_admin": false, 00:04:16.583 "nvme_io": false, 00:04:16.583 "nvme_io_md": false, 00:04:16.583 "write_zeroes": true, 00:04:16.583 "zcopy": true, 00:04:16.583 "get_zone_info": false, 00:04:16.583 "zone_management": false, 00:04:16.583 "zone_append": false, 00:04:16.583 "compare": false, 00:04:16.583 "compare_and_write": false, 00:04:16.583 "abort": true, 00:04:16.583 "seek_hole": false, 00:04:16.583 "seek_data": false, 00:04:16.583 "copy": true, 00:04:16.583 "nvme_iov_md": false 00:04:16.583 }, 00:04:16.583 "memory_domains": [ 00:04:16.583 { 00:04:16.583 "dma_device_id": "system", 00:04:16.583 "dma_device_type": 1 00:04:16.583 }, 00:04:16.583 { 00:04:16.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.583 "dma_device_type": 2 00:04:16.583 } 
00:04:16.583 ], 00:04:16.583 "driver_specific": {} 00:04:16.583 } 00:04:16.583 ]' 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.583 [2024-12-06 09:41:41.722346] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:16.583 [2024-12-06 09:41:41.722424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.583 [2024-12-06 09:41:41.722448] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:16.583 [2024-12-06 09:41:41.722460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.583 [2024-12-06 09:41:41.724747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.583 [2024-12-06 09:41:41.724797] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.583 Passthru0 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.583 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.583 { 00:04:16.583 "name": "Malloc2", 00:04:16.583 "aliases": [ 00:04:16.583 "8193c7c7-089e-4fec-afe8-f4fe3f847c3d" 
00:04:16.583 ], 00:04:16.583 "product_name": "Malloc disk", 00:04:16.583 "block_size": 512, 00:04:16.583 "num_blocks": 16384, 00:04:16.583 "uuid": "8193c7c7-089e-4fec-afe8-f4fe3f847c3d", 00:04:16.583 "assigned_rate_limits": { 00:04:16.583 "rw_ios_per_sec": 0, 00:04:16.583 "rw_mbytes_per_sec": 0, 00:04:16.583 "r_mbytes_per_sec": 0, 00:04:16.583 "w_mbytes_per_sec": 0 00:04:16.583 }, 00:04:16.583 "claimed": true, 00:04:16.583 "claim_type": "exclusive_write", 00:04:16.583 "zoned": false, 00:04:16.583 "supported_io_types": { 00:04:16.583 "read": true, 00:04:16.583 "write": true, 00:04:16.583 "unmap": true, 00:04:16.583 "flush": true, 00:04:16.584 "reset": true, 00:04:16.584 "nvme_admin": false, 00:04:16.584 "nvme_io": false, 00:04:16.584 "nvme_io_md": false, 00:04:16.584 "write_zeroes": true, 00:04:16.584 "zcopy": true, 00:04:16.584 "get_zone_info": false, 00:04:16.584 "zone_management": false, 00:04:16.584 "zone_append": false, 00:04:16.584 "compare": false, 00:04:16.584 "compare_and_write": false, 00:04:16.584 "abort": true, 00:04:16.584 "seek_hole": false, 00:04:16.584 "seek_data": false, 00:04:16.584 "copy": true, 00:04:16.584 "nvme_iov_md": false 00:04:16.584 }, 00:04:16.584 "memory_domains": [ 00:04:16.584 { 00:04:16.584 "dma_device_id": "system", 00:04:16.584 "dma_device_type": 1 00:04:16.584 }, 00:04:16.584 { 00:04:16.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.584 "dma_device_type": 2 00:04:16.584 } 00:04:16.584 ], 00:04:16.584 "driver_specific": {} 00:04:16.584 }, 00:04:16.584 { 00:04:16.584 "name": "Passthru0", 00:04:16.584 "aliases": [ 00:04:16.584 "d4ad14d8-b996-56b5-8e84-01c43179bdab" 00:04:16.584 ], 00:04:16.584 "product_name": "passthru", 00:04:16.584 "block_size": 512, 00:04:16.584 "num_blocks": 16384, 00:04:16.584 "uuid": "d4ad14d8-b996-56b5-8e84-01c43179bdab", 00:04:16.584 "assigned_rate_limits": { 00:04:16.584 "rw_ios_per_sec": 0, 00:04:16.584 "rw_mbytes_per_sec": 0, 00:04:16.584 "r_mbytes_per_sec": 0, 00:04:16.584 "w_mbytes_per_sec": 0 
00:04:16.584 }, 00:04:16.584 "claimed": false, 00:04:16.584 "zoned": false, 00:04:16.584 "supported_io_types": { 00:04:16.584 "read": true, 00:04:16.584 "write": true, 00:04:16.584 "unmap": true, 00:04:16.584 "flush": true, 00:04:16.584 "reset": true, 00:04:16.584 "nvme_admin": false, 00:04:16.584 "nvme_io": false, 00:04:16.584 "nvme_io_md": false, 00:04:16.584 "write_zeroes": true, 00:04:16.584 "zcopy": true, 00:04:16.584 "get_zone_info": false, 00:04:16.584 "zone_management": false, 00:04:16.584 "zone_append": false, 00:04:16.584 "compare": false, 00:04:16.584 "compare_and_write": false, 00:04:16.584 "abort": true, 00:04:16.584 "seek_hole": false, 00:04:16.584 "seek_data": false, 00:04:16.584 "copy": true, 00:04:16.584 "nvme_iov_md": false 00:04:16.584 }, 00:04:16.584 "memory_domains": [ 00:04:16.584 { 00:04:16.584 "dma_device_id": "system", 00:04:16.584 "dma_device_type": 1 00:04:16.584 }, 00:04:16.584 { 00:04:16.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.584 "dma_device_type": 2 00:04:16.584 } 00:04:16.584 ], 00:04:16.584 "driver_specific": { 00:04:16.584 "passthru": { 00:04:16.584 "name": "Passthru0", 00:04:16.584 "base_bdev_name": "Malloc2" 00:04:16.584 } 00:04:16.584 } 00:04:16.584 } 00:04:16.584 ]' 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.584 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.844 ************************************ 00:04:16.844 END TEST rpc_daemon_integrity 00:04:16.844 ************************************ 00:04:16.844 09:41:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.844 00:04:16.844 real 0m0.337s 00:04:16.844 user 0m0.186s 00:04:16.844 sys 0m0.050s 00:04:16.844 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.844 09:41:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.844 09:41:41 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:16.844 09:41:41 rpc -- rpc/rpc.sh@84 -- # killprocess 56910 00:04:16.844 09:41:41 rpc -- common/autotest_common.sh@954 -- # '[' -z 56910 ']' 00:04:16.844 09:41:41 rpc -- common/autotest_common.sh@958 -- # kill -0 56910 00:04:16.844 09:41:41 rpc -- common/autotest_common.sh@959 -- # uname 00:04:16.844 09:41:41 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.844 09:41:41 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56910 00:04:16.844 killing process with pid 56910 00:04:16.844 09:41:41 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.844 09:41:41 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:16.844 09:41:41 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56910' 00:04:16.844 09:41:41 rpc -- common/autotest_common.sh@973 -- # kill 56910 00:04:16.844 09:41:41 rpc -- common/autotest_common.sh@978 -- # wait 56910 00:04:19.392 00:04:19.392 real 0m5.328s 00:04:19.392 user 0m5.873s 00:04:19.392 sys 0m0.917s 00:04:19.392 09:41:44 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.392 09:41:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.392 ************************************ 00:04:19.392 END TEST rpc 00:04:19.392 ************************************ 00:04:19.392 09:41:44 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:19.392 09:41:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.392 09:41:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.392 09:41:44 -- common/autotest_common.sh@10 -- # set +x 00:04:19.392 ************************************ 00:04:19.392 START TEST skip_rpc 00:04:19.392 ************************************ 00:04:19.392 09:41:44 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:19.392 * Looking for test storage... 
00:04:19.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.392 09:41:44 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:19.392 09:41:44 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:19.392 09:41:44 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:19.392 09:41:44 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:19.392 09:41:44 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.393 09:41:44 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:19.393 09:41:44 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:19.393 09:41:44 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.393 09:41:44 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:19.393 09:41:44 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.393 09:41:44 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.393 09:41:44 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.393 09:41:44 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:19.393 09:41:44 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.393 09:41:44 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:19.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.393 --rc genhtml_branch_coverage=1 00:04:19.393 --rc genhtml_function_coverage=1 00:04:19.393 --rc genhtml_legend=1 00:04:19.393 --rc geninfo_all_blocks=1 00:04:19.393 --rc geninfo_unexecuted_blocks=1 00:04:19.393 00:04:19.393 ' 00:04:19.393 09:41:44 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:19.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.393 --rc genhtml_branch_coverage=1 00:04:19.393 --rc genhtml_function_coverage=1 00:04:19.393 --rc genhtml_legend=1 00:04:19.393 --rc geninfo_all_blocks=1 00:04:19.393 --rc geninfo_unexecuted_blocks=1 00:04:19.393 00:04:19.393 ' 00:04:19.393 09:41:44 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:19.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.393 --rc genhtml_branch_coverage=1 00:04:19.393 --rc genhtml_function_coverage=1 00:04:19.393 --rc genhtml_legend=1 00:04:19.393 --rc geninfo_all_blocks=1 00:04:19.393 --rc geninfo_unexecuted_blocks=1 00:04:19.393 00:04:19.393 ' 00:04:19.393 09:41:44 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:19.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.393 --rc genhtml_branch_coverage=1 00:04:19.393 --rc genhtml_function_coverage=1 00:04:19.393 --rc genhtml_legend=1 00:04:19.393 --rc geninfo_all_blocks=1 00:04:19.393 --rc geninfo_unexecuted_blocks=1 00:04:19.393 00:04:19.393 ' 00:04:19.651 09:41:44 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.651 09:41:44 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:19.651 09:41:44 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:19.651 09:41:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.651 09:41:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.651 09:41:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.651 ************************************ 00:04:19.651 START TEST skip_rpc 00:04:19.651 ************************************ 00:04:19.651 09:41:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:19.651 09:41:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57139 00:04:19.651 09:41:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:19.651 09:41:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.651 09:41:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:19.651 [2024-12-06 09:41:44.779854] Starting SPDK v25.01-pre 
git sha1 eec618948 / DPDK 24.03.0 initialization... 00:04:19.651 [2024-12-06 09:41:44.779981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57139 ] 00:04:19.910 [2024-12-06 09:41:44.958682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.910 [2024-12-06 09:41:45.079383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57139 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57139 ']' 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57139 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57139 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.178 killing process with pid 57139 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57139' 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57139 00:04:25.178 09:41:49 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57139 00:04:27.081 00:04:27.081 real 0m7.562s 00:04:27.081 user 0m7.088s 00:04:27.081 sys 0m0.386s 00:04:27.081 09:41:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.081 ************************************ 00:04:27.081 END TEST skip_rpc 00:04:27.081 ************************************ 00:04:27.081 09:41:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.081 09:41:52 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:27.081 09:41:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.081 09:41:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.081 09:41:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.081 
************************************ 00:04:27.081 START TEST skip_rpc_with_json 00:04:27.081 ************************************ 00:04:27.081 09:41:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:27.081 09:41:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:27.081 09:41:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57254 00:04:27.081 09:41:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.081 09:41:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.081 09:41:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57254 00:04:27.081 09:41:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57254 ']' 00:04:27.081 09:41:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.081 09:41:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.081 09:41:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.081 09:41:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.081 09:41:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.340 [2024-12-06 09:41:52.406975] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:04:27.340 [2024-12-06 09:41:52.407111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57254 ] 00:04:27.340 [2024-12-06 09:41:52.586202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.599 [2024-12-06 09:41:52.711571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.535 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.535 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:28.535 09:41:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:28.535 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.535 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.535 [2024-12-06 09:41:53.655819] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:28.535 request: 00:04:28.535 { 00:04:28.535 "trtype": "tcp", 00:04:28.535 "method": "nvmf_get_transports", 00:04:28.535 "req_id": 1 00:04:28.535 } 00:04:28.535 Got JSON-RPC error response 00:04:28.535 response: 00:04:28.535 { 00:04:28.535 "code": -19, 00:04:28.535 "message": "No such device" 00:04:28.535 } 00:04:28.535 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:28.535 09:41:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:28.535 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.535 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.535 [2024-12-06 09:41:53.671927] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:28.535 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.535 09:41:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:28.535 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.535 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.794 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.794 09:41:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.794 { 00:04:28.794 "subsystems": [ 00:04:28.794 { 00:04:28.794 "subsystem": "fsdev", 00:04:28.794 "config": [ 00:04:28.794 { 00:04:28.794 "method": "fsdev_set_opts", 00:04:28.794 "params": { 00:04:28.794 "fsdev_io_pool_size": 65535, 00:04:28.794 "fsdev_io_cache_size": 256 00:04:28.794 } 00:04:28.794 } 00:04:28.794 ] 00:04:28.794 }, 00:04:28.794 { 00:04:28.794 "subsystem": "keyring", 00:04:28.794 "config": [] 00:04:28.794 }, 00:04:28.794 { 00:04:28.794 "subsystem": "iobuf", 00:04:28.794 "config": [ 00:04:28.794 { 00:04:28.794 "method": "iobuf_set_options", 00:04:28.794 "params": { 00:04:28.794 "small_pool_count": 8192, 00:04:28.794 "large_pool_count": 1024, 00:04:28.794 "small_bufsize": 8192, 00:04:28.794 "large_bufsize": 135168, 00:04:28.794 "enable_numa": false 00:04:28.794 } 00:04:28.794 } 00:04:28.794 ] 00:04:28.794 }, 00:04:28.794 { 00:04:28.794 "subsystem": "sock", 00:04:28.794 "config": [ 00:04:28.794 { 00:04:28.794 "method": "sock_set_default_impl", 00:04:28.794 "params": { 00:04:28.794 "impl_name": "posix" 00:04:28.794 } 00:04:28.794 }, 00:04:28.794 { 00:04:28.794 "method": "sock_impl_set_options", 00:04:28.794 "params": { 00:04:28.794 "impl_name": "ssl", 00:04:28.794 "recv_buf_size": 4096, 00:04:28.794 "send_buf_size": 4096, 00:04:28.794 "enable_recv_pipe": true, 00:04:28.794 "enable_quickack": false, 00:04:28.794 
"enable_placement_id": 0, 00:04:28.794 "enable_zerocopy_send_server": true, 00:04:28.794 "enable_zerocopy_send_client": false, 00:04:28.794 "zerocopy_threshold": 0, 00:04:28.794 "tls_version": 0, 00:04:28.794 "enable_ktls": false 00:04:28.794 } 00:04:28.794 }, 00:04:28.794 { 00:04:28.794 "method": "sock_impl_set_options", 00:04:28.794 "params": { 00:04:28.794 "impl_name": "posix", 00:04:28.794 "recv_buf_size": 2097152, 00:04:28.794 "send_buf_size": 2097152, 00:04:28.794 "enable_recv_pipe": true, 00:04:28.794 "enable_quickack": false, 00:04:28.794 "enable_placement_id": 0, 00:04:28.794 "enable_zerocopy_send_server": true, 00:04:28.794 "enable_zerocopy_send_client": false, 00:04:28.794 "zerocopy_threshold": 0, 00:04:28.794 "tls_version": 0, 00:04:28.794 "enable_ktls": false 00:04:28.794 } 00:04:28.794 } 00:04:28.794 ] 00:04:28.794 }, 00:04:28.794 { 00:04:28.794 "subsystem": "vmd", 00:04:28.794 "config": [] 00:04:28.794 }, 00:04:28.794 { 00:04:28.794 "subsystem": "accel", 00:04:28.794 "config": [ 00:04:28.794 { 00:04:28.794 "method": "accel_set_options", 00:04:28.794 "params": { 00:04:28.794 "small_cache_size": 128, 00:04:28.794 "large_cache_size": 16, 00:04:28.794 "task_count": 2048, 00:04:28.794 "sequence_count": 2048, 00:04:28.794 "buf_count": 2048 00:04:28.794 } 00:04:28.794 } 00:04:28.794 ] 00:04:28.794 }, 00:04:28.794 { 00:04:28.794 "subsystem": "bdev", 00:04:28.794 "config": [ 00:04:28.794 { 00:04:28.794 "method": "bdev_set_options", 00:04:28.794 "params": { 00:04:28.794 "bdev_io_pool_size": 65535, 00:04:28.794 "bdev_io_cache_size": 256, 00:04:28.794 "bdev_auto_examine": true, 00:04:28.794 "iobuf_small_cache_size": 128, 00:04:28.794 "iobuf_large_cache_size": 16 00:04:28.794 } 00:04:28.794 }, 00:04:28.794 { 00:04:28.794 "method": "bdev_raid_set_options", 00:04:28.794 "params": { 00:04:28.794 "process_window_size_kb": 1024, 00:04:28.794 "process_max_bandwidth_mb_sec": 0 00:04:28.794 } 00:04:28.794 }, 00:04:28.794 { 00:04:28.794 "method": "bdev_iscsi_set_options", 
00:04:28.794 "params": { 00:04:28.794 "timeout_sec": 30 00:04:28.794 } 00:04:28.794 }, 00:04:28.794 { 00:04:28.794 "method": "bdev_nvme_set_options", 00:04:28.794 "params": { 00:04:28.794 "action_on_timeout": "none", 00:04:28.794 "timeout_us": 0, 00:04:28.794 "timeout_admin_us": 0, 00:04:28.794 "keep_alive_timeout_ms": 10000, 00:04:28.794 "arbitration_burst": 0, 00:04:28.794 "low_priority_weight": 0, 00:04:28.794 "medium_priority_weight": 0, 00:04:28.794 "high_priority_weight": 0, 00:04:28.794 "nvme_adminq_poll_period_us": 10000, 00:04:28.794 "nvme_ioq_poll_period_us": 0, 00:04:28.794 "io_queue_requests": 0, 00:04:28.794 "delay_cmd_submit": true, 00:04:28.794 "transport_retry_count": 4, 00:04:28.794 "bdev_retry_count": 3, 00:04:28.794 "transport_ack_timeout": 0, 00:04:28.794 "ctrlr_loss_timeout_sec": 0, 00:04:28.794 "reconnect_delay_sec": 0, 00:04:28.794 "fast_io_fail_timeout_sec": 0, 00:04:28.794 "disable_auto_failback": false, 00:04:28.794 "generate_uuids": false, 00:04:28.794 "transport_tos": 0, 00:04:28.794 "nvme_error_stat": false, 00:04:28.794 "rdma_srq_size": 0, 00:04:28.794 "io_path_stat": false, 00:04:28.794 "allow_accel_sequence": false, 00:04:28.794 "rdma_max_cq_size": 0, 00:04:28.794 "rdma_cm_event_timeout_ms": 0, 00:04:28.794 "dhchap_digests": [ 00:04:28.794 "sha256", 00:04:28.794 "sha384", 00:04:28.794 "sha512" 00:04:28.794 ], 00:04:28.794 "dhchap_dhgroups": [ 00:04:28.795 "null", 00:04:28.795 "ffdhe2048", 00:04:28.795 "ffdhe3072", 00:04:28.795 "ffdhe4096", 00:04:28.795 "ffdhe6144", 00:04:28.795 "ffdhe8192" 00:04:28.795 ] 00:04:28.795 } 00:04:28.795 }, 00:04:28.795 { 00:04:28.795 "method": "bdev_nvme_set_hotplug", 00:04:28.795 "params": { 00:04:28.795 "period_us": 100000, 00:04:28.795 "enable": false 00:04:28.795 } 00:04:28.795 }, 00:04:28.795 { 00:04:28.795 "method": "bdev_wait_for_examine" 00:04:28.795 } 00:04:28.795 ] 00:04:28.795 }, 00:04:28.795 { 00:04:28.795 "subsystem": "scsi", 00:04:28.795 "config": null 00:04:28.795 }, 00:04:28.795 { 
00:04:28.795 "subsystem": "scheduler", 00:04:28.795 "config": [ 00:04:28.795 { 00:04:28.795 "method": "framework_set_scheduler", 00:04:28.795 "params": { 00:04:28.795 "name": "static" 00:04:28.795 } 00:04:28.795 } 00:04:28.795 ] 00:04:28.795 }, 00:04:28.795 { 00:04:28.795 "subsystem": "vhost_scsi", 00:04:28.795 "config": [] 00:04:28.795 }, 00:04:28.795 { 00:04:28.795 "subsystem": "vhost_blk", 00:04:28.795 "config": [] 00:04:28.795 }, 00:04:28.795 { 00:04:28.795 "subsystem": "ublk", 00:04:28.795 "config": [] 00:04:28.795 }, 00:04:28.795 { 00:04:28.795 "subsystem": "nbd", 00:04:28.795 "config": [] 00:04:28.795 }, 00:04:28.795 { 00:04:28.795 "subsystem": "nvmf", 00:04:28.795 "config": [ 00:04:28.795 { 00:04:28.795 "method": "nvmf_set_config", 00:04:28.795 "params": { 00:04:28.795 "discovery_filter": "match_any", 00:04:28.795 "admin_cmd_passthru": { 00:04:28.795 "identify_ctrlr": false 00:04:28.795 }, 00:04:28.795 "dhchap_digests": [ 00:04:28.795 "sha256", 00:04:28.795 "sha384", 00:04:28.795 "sha512" 00:04:28.795 ], 00:04:28.795 "dhchap_dhgroups": [ 00:04:28.795 "null", 00:04:28.795 "ffdhe2048", 00:04:28.795 "ffdhe3072", 00:04:28.795 "ffdhe4096", 00:04:28.795 "ffdhe6144", 00:04:28.795 "ffdhe8192" 00:04:28.795 ] 00:04:28.795 } 00:04:28.795 }, 00:04:28.795 { 00:04:28.795 "method": "nvmf_set_max_subsystems", 00:04:28.795 "params": { 00:04:28.795 "max_subsystems": 1024 00:04:28.795 } 00:04:28.795 }, 00:04:28.795 { 00:04:28.795 "method": "nvmf_set_crdt", 00:04:28.795 "params": { 00:04:28.795 "crdt1": 0, 00:04:28.795 "crdt2": 0, 00:04:28.795 "crdt3": 0 00:04:28.795 } 00:04:28.795 }, 00:04:28.795 { 00:04:28.795 "method": "nvmf_create_transport", 00:04:28.795 "params": { 00:04:28.795 "trtype": "TCP", 00:04:28.795 "max_queue_depth": 128, 00:04:28.795 "max_io_qpairs_per_ctrlr": 127, 00:04:28.795 "in_capsule_data_size": 4096, 00:04:28.795 "max_io_size": 131072, 00:04:28.795 "io_unit_size": 131072, 00:04:28.795 "max_aq_depth": 128, 00:04:28.795 "num_shared_buffers": 511, 
00:04:28.795 "buf_cache_size": 4294967295, 00:04:28.795 "dif_insert_or_strip": false, 00:04:28.795 "zcopy": false, 00:04:28.795 "c2h_success": true, 00:04:28.795 "sock_priority": 0, 00:04:28.795 "abort_timeout_sec": 1, 00:04:28.795 "ack_timeout": 0, 00:04:28.795 "data_wr_pool_size": 0 00:04:28.795 } 00:04:28.795 } 00:04:28.795 ] 00:04:28.795 }, 00:04:28.795 { 00:04:28.795 "subsystem": "iscsi", 00:04:28.795 "config": [ 00:04:28.795 { 00:04:28.795 "method": "iscsi_set_options", 00:04:28.795 "params": { 00:04:28.795 "node_base": "iqn.2016-06.io.spdk", 00:04:28.795 "max_sessions": 128, 00:04:28.795 "max_connections_per_session": 2, 00:04:28.795 "max_queue_depth": 64, 00:04:28.795 "default_time2wait": 2, 00:04:28.795 "default_time2retain": 20, 00:04:28.795 "first_burst_length": 8192, 00:04:28.795 "immediate_data": true, 00:04:28.795 "allow_duplicated_isid": false, 00:04:28.795 "error_recovery_level": 0, 00:04:28.795 "nop_timeout": 60, 00:04:28.795 "nop_in_interval": 30, 00:04:28.795 "disable_chap": false, 00:04:28.795 "require_chap": false, 00:04:28.795 "mutual_chap": false, 00:04:28.795 "chap_group": 0, 00:04:28.795 "max_large_datain_per_connection": 64, 00:04:28.795 "max_r2t_per_connection": 4, 00:04:28.795 "pdu_pool_size": 36864, 00:04:28.795 "immediate_data_pool_size": 16384, 00:04:28.795 "data_out_pool_size": 2048 00:04:28.795 } 00:04:28.795 } 00:04:28.795 ] 00:04:28.795 } 00:04:28.795 ] 00:04:28.795 } 00:04:28.795 09:41:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:28.795 09:41:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57254 00:04:28.795 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57254 ']' 00:04:28.795 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57254 00:04:28.795 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:28.795 09:41:53 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.795 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57254 00:04:28.795 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.795 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.795 killing process with pid 57254 00:04:28.795 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57254' 00:04:28.795 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57254 00:04:28.795 09:41:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57254 00:04:31.327 09:41:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57309 00:04:31.327 09:41:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:31.327 09:41:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:36.598 09:42:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57309 00:04:36.598 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57309 ']' 00:04:36.598 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57309 00:04:36.598 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:36.598 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.598 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57309 00:04:36.598 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.598 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:36.598 killing process with pid 57309 00:04:36.598 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57309' 00:04:36.598 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57309 00:04:36.598 09:42:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57309 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:39.135 00:04:39.135 real 0m11.508s 00:04:39.135 user 0m11.022s 00:04:39.135 sys 0m0.863s 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:39.135 ************************************ 00:04:39.135 END TEST skip_rpc_with_json 00:04:39.135 ************************************ 00:04:39.135 09:42:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:39.135 09:42:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.135 09:42:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.135 09:42:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.135 ************************************ 00:04:39.135 START TEST skip_rpc_with_delay 00:04:39.135 ************************************ 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:39.135 09:42:03 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:39.135 09:42:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:39.135 [2024-12-06 09:42:03.973345] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:39.135 09:42:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:39.135 09:42:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:39.135 09:42:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:39.135 09:42:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:39.135 00:04:39.135 real 0m0.173s 00:04:39.135 user 0m0.092s 00:04:39.135 sys 0m0.080s 00:04:39.135 09:42:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.135 09:42:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:39.135 ************************************ 00:04:39.135 END TEST skip_rpc_with_delay 00:04:39.135 ************************************ 00:04:39.135 09:42:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:39.135 09:42:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:39.135 09:42:04 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:39.135 09:42:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.135 09:42:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.135 09:42:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.135 ************************************ 00:04:39.135 START TEST exit_on_failed_rpc_init 00:04:39.135 ************************************ 00:04:39.135 09:42:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:39.135 09:42:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57438 00:04:39.135 09:42:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.135 09:42:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57438 00:04:39.135 09:42:04 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57438 ']' 00:04:39.135 09:42:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.135 09:42:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.135 09:42:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.135 09:42:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.135 09:42:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.135 [2024-12-06 09:42:04.203025] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:04:39.135 [2024-12-06 09:42:04.203180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57438 ] 00:04:39.135 [2024-12-06 09:42:04.377595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.393 [2024-12-06 09:42:04.498901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.324 09:42:05 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:40.324 09:42:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.324 [2024-12-06 09:42:05.496664] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:04:40.324 [2024-12-06 09:42:05.496826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57456 ] 00:04:40.582 [2024-12-06 09:42:05.652476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.582 [2024-12-06 09:42:05.791914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.582 [2024-12-06 09:42:05.792014] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:40.582 [2024-12-06 09:42:05.792030] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:40.582 [2024-12-06 09:42:05.792043] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57438 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57438 ']' 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57438 00:04:40.841 09:42:06 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57438 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.841 killing process with pid 57438 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57438' 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57438 00:04:40.841 09:42:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57438 00:04:43.373 00:04:43.373 real 0m4.425s 00:04:43.373 user 0m4.773s 00:04:43.373 sys 0m0.565s 00:04:43.373 09:42:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.373 09:42:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.373 ************************************ 00:04:43.373 END TEST exit_on_failed_rpc_init 00:04:43.373 ************************************ 00:04:43.373 09:42:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:43.373 00:04:43.373 real 0m24.126s 00:04:43.373 user 0m23.170s 00:04:43.373 sys 0m2.170s 00:04:43.373 09:42:08 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.373 09:42:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.373 ************************************ 00:04:43.373 END TEST skip_rpc 00:04:43.373 ************************************ 00:04:43.373 09:42:08 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:43.373 09:42:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.373 09:42:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.373 09:42:08 -- common/autotest_common.sh@10 -- # set +x 00:04:43.633 ************************************ 00:04:43.633 START TEST rpc_client 00:04:43.633 ************************************ 00:04:43.633 09:42:08 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:43.633 * Looking for test storage... 00:04:43.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:43.633 09:42:08 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:43.633 09:42:08 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:43.633 09:42:08 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:43.633 09:42:08 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.633 09:42:08 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:43.633 09:42:08 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.633 09:42:08 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:43.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.633 --rc genhtml_branch_coverage=1 00:04:43.633 --rc genhtml_function_coverage=1 00:04:43.633 --rc genhtml_legend=1 00:04:43.633 --rc geninfo_all_blocks=1 00:04:43.633 --rc geninfo_unexecuted_blocks=1 00:04:43.633 00:04:43.633 ' 00:04:43.633 09:42:08 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:43.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.633 --rc genhtml_branch_coverage=1 00:04:43.633 --rc genhtml_function_coverage=1 00:04:43.633 --rc 
genhtml_legend=1 00:04:43.633 --rc geninfo_all_blocks=1 00:04:43.633 --rc geninfo_unexecuted_blocks=1 00:04:43.633 00:04:43.633 ' 00:04:43.633 09:42:08 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:43.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.633 --rc genhtml_branch_coverage=1 00:04:43.633 --rc genhtml_function_coverage=1 00:04:43.633 --rc genhtml_legend=1 00:04:43.633 --rc geninfo_all_blocks=1 00:04:43.633 --rc geninfo_unexecuted_blocks=1 00:04:43.633 00:04:43.633 ' 00:04:43.633 09:42:08 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:43.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.633 --rc genhtml_branch_coverage=1 00:04:43.633 --rc genhtml_function_coverage=1 00:04:43.633 --rc genhtml_legend=1 00:04:43.633 --rc geninfo_all_blocks=1 00:04:43.633 --rc geninfo_unexecuted_blocks=1 00:04:43.633 00:04:43.633 ' 00:04:43.633 09:42:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:43.892 OK 00:04:43.892 09:42:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:43.892 00:04:43.892 real 0m0.305s 00:04:43.892 user 0m0.164s 00:04:43.892 sys 0m0.159s 00:04:43.892 09:42:08 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.892 09:42:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:43.892 ************************************ 00:04:43.892 END TEST rpc_client 00:04:43.892 ************************************ 00:04:43.892 09:42:09 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:43.892 09:42:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.892 09:42:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.892 09:42:09 -- common/autotest_common.sh@10 -- # set +x 00:04:43.892 ************************************ 00:04:43.892 START TEST json_config 
00:04:43.892 ************************************ 00:04:43.892 09:42:09 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:43.892 09:42:09 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:43.892 09:42:09 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:43.892 09:42:09 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.151 09:42:09 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.151 09:42:09 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.151 09:42:09 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.151 09:42:09 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.151 09:42:09 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.151 09:42:09 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.151 09:42:09 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.151 09:42:09 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.151 09:42:09 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.151 09:42:09 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.151 09:42:09 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.151 09:42:09 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.151 09:42:09 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:44.151 09:42:09 json_config -- scripts/common.sh@345 -- # : 1 00:04:44.151 09:42:09 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.151 09:42:09 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.151 09:42:09 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:44.151 09:42:09 json_config -- scripts/common.sh@353 -- # local d=1 00:04:44.151 09:42:09 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.151 09:42:09 json_config -- scripts/common.sh@355 -- # echo 1 00:04:44.151 09:42:09 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.151 09:42:09 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:44.151 09:42:09 json_config -- scripts/common.sh@353 -- # local d=2 00:04:44.151 09:42:09 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.151 09:42:09 json_config -- scripts/common.sh@355 -- # echo 2 00:04:44.151 09:42:09 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.151 09:42:09 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.151 09:42:09 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.151 09:42:09 json_config -- scripts/common.sh@368 -- # return 0 00:04:44.151 09:42:09 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.151 09:42:09 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.151 --rc genhtml_branch_coverage=1 00:04:44.151 --rc genhtml_function_coverage=1 00:04:44.151 --rc genhtml_legend=1 00:04:44.151 --rc geninfo_all_blocks=1 00:04:44.151 --rc geninfo_unexecuted_blocks=1 00:04:44.151 00:04:44.151 ' 00:04:44.151 09:42:09 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.151 --rc genhtml_branch_coverage=1 00:04:44.151 --rc genhtml_function_coverage=1 00:04:44.151 --rc genhtml_legend=1 00:04:44.151 --rc geninfo_all_blocks=1 00:04:44.151 --rc geninfo_unexecuted_blocks=1 00:04:44.151 00:04:44.151 ' 00:04:44.151 09:42:09 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.151 --rc genhtml_branch_coverage=1 00:04:44.151 --rc genhtml_function_coverage=1 00:04:44.151 --rc genhtml_legend=1 00:04:44.151 --rc geninfo_all_blocks=1 00:04:44.151 --rc geninfo_unexecuted_blocks=1 00:04:44.151 00:04:44.151 ' 00:04:44.151 09:42:09 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.151 --rc genhtml_branch_coverage=1 00:04:44.151 --rc genhtml_function_coverage=1 00:04:44.151 --rc genhtml_legend=1 00:04:44.151 --rc geninfo_all_blocks=1 00:04:44.151 --rc geninfo_unexecuted_blocks=1 00:04:44.151 00:04:44.151 ' 00:04:44.151 09:42:09 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:884f5f63-3933-4296-a08b-b3110049e843 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=884f5f63-3933-4296-a08b-b3110049e843 00:04:44.151 09:42:09 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.152 09:42:09 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.152 09:42:09 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.152 09:42:09 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.152 09:42:09 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.152 09:42:09 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.152 09:42:09 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.152 09:42:09 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.152 09:42:09 json_config -- paths/export.sh@5 -- # export PATH 00:04:44.152 09:42:09 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@51 -- # : 0 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.152 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.152 09:42:09 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.152 09:42:09 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:44.152 09:42:09 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:44.152 09:42:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:44.152 09:42:09 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:44.152 09:42:09 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:44.152 WARNING: No tests are enabled so not running JSON configuration tests 00:04:44.152 09:42:09 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:44.152 09:42:09 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:44.152 00:04:44.152 real 0m0.227s 00:04:44.152 user 0m0.137s 00:04:44.152 sys 0m0.097s 00:04:44.152 09:42:09 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.152 09:42:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.152 ************************************ 00:04:44.152 END TEST json_config 00:04:44.152 ************************************ 00:04:44.152 09:42:09 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:44.152 09:42:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.152 09:42:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.152 09:42:09 -- common/autotest_common.sh@10 -- # set +x 00:04:44.152 ************************************ 00:04:44.152 START TEST json_config_extra_key 00:04:44.152 ************************************ 00:04:44.152 09:42:09 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:44.152 09:42:09 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.152 09:42:09 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:04:44.152 09:42:09 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.411 09:42:09 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.411 09:42:09 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:44.411 09:42:09 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.411 09:42:09 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.411 --rc genhtml_branch_coverage=1 00:04:44.411 --rc genhtml_function_coverage=1 00:04:44.411 --rc genhtml_legend=1 00:04:44.411 --rc geninfo_all_blocks=1 00:04:44.411 --rc geninfo_unexecuted_blocks=1 00:04:44.411 00:04:44.411 ' 00:04:44.411 09:42:09 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.411 --rc genhtml_branch_coverage=1 00:04:44.411 --rc genhtml_function_coverage=1 00:04:44.411 --rc 
genhtml_legend=1 00:04:44.411 --rc geninfo_all_blocks=1 00:04:44.411 --rc geninfo_unexecuted_blocks=1 00:04:44.411 00:04:44.411 ' 00:04:44.412 09:42:09 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.412 --rc genhtml_branch_coverage=1 00:04:44.412 --rc genhtml_function_coverage=1 00:04:44.412 --rc genhtml_legend=1 00:04:44.412 --rc geninfo_all_blocks=1 00:04:44.412 --rc geninfo_unexecuted_blocks=1 00:04:44.412 00:04:44.412 ' 00:04:44.412 09:42:09 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.412 --rc genhtml_branch_coverage=1 00:04:44.412 --rc genhtml_function_coverage=1 00:04:44.412 --rc genhtml_legend=1 00:04:44.412 --rc geninfo_all_blocks=1 00:04:44.412 --rc geninfo_unexecuted_blocks=1 00:04:44.412 00:04:44.412 ' 00:04:44.412 09:42:09 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:884f5f63-3933-4296-a08b-b3110049e843 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=884f5f63-3933-4296-a08b-b3110049e843 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.412 09:42:09 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.412 09:42:09 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.412 09:42:09 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.412 09:42:09 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.412 09:42:09 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.412 09:42:09 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.412 09:42:09 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.412 09:42:09 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:44.412 09:42:09 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.412 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.412 09:42:09 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.412 09:42:09 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:44.412 09:42:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:44.412 09:42:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:44.412 09:42:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:44.412 09:42:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:44.412 09:42:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:44.412 09:42:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:44.412 09:42:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:44.412 09:42:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:44.412 09:42:09 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:44.412 INFO: launching applications... 00:04:44.412 09:42:09 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:44.412 09:42:09 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:44.412 09:42:09 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:44.412 09:42:09 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:44.412 09:42:09 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.412 09:42:09 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.412 09:42:09 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.412 09:42:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.412 09:42:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.412 09:42:09 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57666 00:04:44.412 Waiting for target to run... 00:04:44.412 09:42:09 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:44.412 09:42:09 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57666 /var/tmp/spdk_tgt.sock 00:04:44.412 09:42:09 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57666 ']' 00:04:44.412 09:42:09 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.412 09:42:09 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:44.412 09:42:09 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:44.412 09:42:09 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.412 09:42:09 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.412 09:42:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:44.412 [2024-12-06 09:42:09.612795] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:04:44.412 [2024-12-06 09:42:09.612939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57666 ] 00:04:44.981 [2024-12-06 09:42:10.002325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.981 [2024-12-06 09:42:10.110312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.919 09:42:10 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.919 09:42:10 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:45.919 00:04:45.919 09:42:10 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:45.919 INFO: shutting down applications... 00:04:45.919 09:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:45.919 09:42:10 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:45.919 09:42:10 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:45.919 09:42:10 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:45.919 09:42:10 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57666 ]] 00:04:45.919 09:42:10 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57666 00:04:45.919 09:42:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:45.919 09:42:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.919 09:42:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57666 00:04:45.919 09:42:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.181 09:42:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.181 09:42:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.181 09:42:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57666 00:04:46.181 09:42:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.754 09:42:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.754 09:42:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.754 09:42:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57666 00:04:46.754 09:42:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.325 09:42:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.325 09:42:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.325 09:42:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57666 00:04:47.325 09:42:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.895 09:42:12 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:47.895 09:42:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.895 09:42:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57666 00:04:47.895 09:42:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.155 09:42:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.155 09:42:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.155 09:42:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57666 00:04:48.155 09:42:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.725 09:42:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.725 09:42:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.725 09:42:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57666 00:04:48.725 09:42:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:48.725 09:42:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:48.725 09:42:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:48.725 SPDK target shutdown done 00:04:48.725 09:42:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:48.725 Success 00:04:48.725 09:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:48.725 00:04:48.725 real 0m4.605s 00:04:48.725 user 0m4.153s 00:04:48.725 sys 0m0.557s 00:04:48.725 09:42:13 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.725 09:42:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:48.725 ************************************ 00:04:48.725 END TEST json_config_extra_key 00:04:48.725 ************************************ 00:04:48.725 09:42:13 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.725 09:42:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.725 09:42:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.725 09:42:13 -- common/autotest_common.sh@10 -- # set +x 00:04:48.725 ************************************ 00:04:48.725 START TEST alias_rpc 00:04:48.725 ************************************ 00:04:48.725 09:42:13 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:48.985 * Looking for test storage... 00:04:48.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:48.985 09:42:14 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.985 09:42:14 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.985 09:42:14 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:48.986 09:42:14 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.986 09:42:14 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.986 09:42:14 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:48.986 09:42:14 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.986 09:42:14 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:48.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.986 --rc genhtml_branch_coverage=1 00:04:48.986 --rc genhtml_function_coverage=1 00:04:48.986 --rc genhtml_legend=1 00:04:48.986 --rc geninfo_all_blocks=1 00:04:48.986 --rc geninfo_unexecuted_blocks=1 00:04:48.986 00:04:48.986 ' 00:04:48.986 09:42:14 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:48.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.986 --rc genhtml_branch_coverage=1 00:04:48.986 --rc genhtml_function_coverage=1 00:04:48.986 --rc 
genhtml_legend=1 00:04:48.986 --rc geninfo_all_blocks=1 00:04:48.986 --rc geninfo_unexecuted_blocks=1 00:04:48.986 00:04:48.986 ' 00:04:48.986 09:42:14 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:48.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.986 --rc genhtml_branch_coverage=1 00:04:48.986 --rc genhtml_function_coverage=1 00:04:48.986 --rc genhtml_legend=1 00:04:48.986 --rc geninfo_all_blocks=1 00:04:48.986 --rc geninfo_unexecuted_blocks=1 00:04:48.986 00:04:48.986 ' 00:04:48.986 09:42:14 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:48.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.986 --rc genhtml_branch_coverage=1 00:04:48.986 --rc genhtml_function_coverage=1 00:04:48.986 --rc genhtml_legend=1 00:04:48.986 --rc geninfo_all_blocks=1 00:04:48.986 --rc geninfo_unexecuted_blocks=1 00:04:48.986 00:04:48.986 ' 00:04:48.986 09:42:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:48.986 09:42:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57782 00:04:48.986 09:42:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57782 00:04:48.986 09:42:14 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57782 ']' 00:04:48.986 09:42:14 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.986 09:42:14 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.986 09:42:14 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:48.986 09:42:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:48.986 09:42:14 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.986 09:42:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.246 [2024-12-06 09:42:14.310206] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:04:49.246 [2024-12-06 09:42:14.310347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57782 ] 00:04:49.246 [2024-12-06 09:42:14.486130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.504 [2024-12-06 09:42:14.603601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.440 09:42:15 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.440 09:42:15 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:50.440 09:42:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:50.699 09:42:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57782 00:04:50.699 09:42:15 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57782 ']' 00:04:50.699 09:42:15 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57782 00:04:50.699 09:42:15 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:50.699 09:42:15 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.699 09:42:15 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57782 00:04:50.699 09:42:15 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.699 09:42:15 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.699 killing process with pid 57782 00:04:50.699 09:42:15 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57782' 00:04:50.699 09:42:15 alias_rpc -- common/autotest_common.sh@973 -- # kill 57782 00:04:50.699 09:42:15 alias_rpc -- common/autotest_common.sh@978 -- # wait 57782 00:04:53.234 00:04:53.234 real 0m4.205s 00:04:53.234 user 0m4.204s 00:04:53.234 sys 0m0.558s 00:04:53.234 09:42:18 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.234 ************************************ 00:04:53.234 END TEST alias_rpc 00:04:53.234 ************************************ 00:04:53.234 09:42:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.234 09:42:18 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:53.234 09:42:18 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:53.234 09:42:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.234 09:42:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.234 09:42:18 -- common/autotest_common.sh@10 -- # set +x 00:04:53.234 ************************************ 00:04:53.234 START TEST spdkcli_tcp 00:04:53.234 ************************************ 00:04:53.234 09:42:18 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:53.234 * Looking for test storage... 
00:04:53.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:53.234 09:42:18 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.234 09:42:18 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.234 09:42:18 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.234 09:42:18 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.234 09:42:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:53.234 09:42:18 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.234 09:42:18 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.234 --rc genhtml_branch_coverage=1 00:04:53.235 --rc genhtml_function_coverage=1 00:04:53.235 --rc genhtml_legend=1 00:04:53.235 --rc geninfo_all_blocks=1 00:04:53.235 --rc geninfo_unexecuted_blocks=1 00:04:53.235 00:04:53.235 ' 00:04:53.235 09:42:18 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.235 --rc genhtml_branch_coverage=1 00:04:53.235 --rc genhtml_function_coverage=1 00:04:53.235 --rc genhtml_legend=1 00:04:53.235 --rc geninfo_all_blocks=1 00:04:53.235 --rc geninfo_unexecuted_blocks=1 00:04:53.235 00:04:53.235 ' 00:04:53.235 09:42:18 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.235 --rc genhtml_branch_coverage=1 00:04:53.235 --rc genhtml_function_coverage=1 00:04:53.235 --rc genhtml_legend=1 00:04:53.235 --rc geninfo_all_blocks=1 00:04:53.235 --rc geninfo_unexecuted_blocks=1 00:04:53.235 00:04:53.235 ' 00:04:53.235 09:42:18 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.235 --rc genhtml_branch_coverage=1 00:04:53.235 --rc genhtml_function_coverage=1 00:04:53.235 --rc genhtml_legend=1 00:04:53.235 --rc geninfo_all_blocks=1 00:04:53.235 --rc geninfo_unexecuted_blocks=1 00:04:53.235 00:04:53.235 ' 00:04:53.235 09:42:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:53.235 09:42:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:53.235 09:42:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:53.235 09:42:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:53.235 09:42:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:53.235 09:42:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:53.235 09:42:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:53.235 09:42:18 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.235 09:42:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.235 09:42:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:53.235 09:42:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57890 00:04:53.235 09:42:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57890 00:04:53.235 09:42:18 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57890 ']' 00:04:53.235 09:42:18 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.235 09:42:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.235 09:42:18 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.235 09:42:18 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.235 09:42:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.494 [2024-12-06 09:42:18.553699] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:04:53.494 [2024-12-06 09:42:18.553897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57890 ] 00:04:53.494 [2024-12-06 09:42:18.741684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.753 [2024-12-06 09:42:18.856351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.753 [2024-12-06 09:42:18.856393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.691 09:42:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.691 09:42:19 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:54.691 09:42:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:54.691 09:42:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57907 00:04:54.691 09:42:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:54.951 [ 00:04:54.951 "bdev_malloc_delete", 
00:04:54.951 "bdev_malloc_create", 00:04:54.951 "bdev_null_resize", 00:04:54.951 "bdev_null_delete", 00:04:54.951 "bdev_null_create", 00:04:54.951 "bdev_nvme_cuse_unregister", 00:04:54.951 "bdev_nvme_cuse_register", 00:04:54.951 "bdev_opal_new_user", 00:04:54.951 "bdev_opal_set_lock_state", 00:04:54.951 "bdev_opal_delete", 00:04:54.951 "bdev_opal_get_info", 00:04:54.951 "bdev_opal_create", 00:04:54.951 "bdev_nvme_opal_revert", 00:04:54.951 "bdev_nvme_opal_init", 00:04:54.951 "bdev_nvme_send_cmd", 00:04:54.951 "bdev_nvme_set_keys", 00:04:54.951 "bdev_nvme_get_path_iostat", 00:04:54.951 "bdev_nvme_get_mdns_discovery_info", 00:04:54.951 "bdev_nvme_stop_mdns_discovery", 00:04:54.951 "bdev_nvme_start_mdns_discovery", 00:04:54.951 "bdev_nvme_set_multipath_policy", 00:04:54.951 "bdev_nvme_set_preferred_path", 00:04:54.951 "bdev_nvme_get_io_paths", 00:04:54.951 "bdev_nvme_remove_error_injection", 00:04:54.951 "bdev_nvme_add_error_injection", 00:04:54.951 "bdev_nvme_get_discovery_info", 00:04:54.951 "bdev_nvme_stop_discovery", 00:04:54.951 "bdev_nvme_start_discovery", 00:04:54.951 "bdev_nvme_get_controller_health_info", 00:04:54.951 "bdev_nvme_disable_controller", 00:04:54.951 "bdev_nvme_enable_controller", 00:04:54.951 "bdev_nvme_reset_controller", 00:04:54.951 "bdev_nvme_get_transport_statistics", 00:04:54.951 "bdev_nvme_apply_firmware", 00:04:54.951 "bdev_nvme_detach_controller", 00:04:54.951 "bdev_nvme_get_controllers", 00:04:54.951 "bdev_nvme_attach_controller", 00:04:54.951 "bdev_nvme_set_hotplug", 00:04:54.951 "bdev_nvme_set_options", 00:04:54.951 "bdev_passthru_delete", 00:04:54.951 "bdev_passthru_create", 00:04:54.951 "bdev_lvol_set_parent_bdev", 00:04:54.951 "bdev_lvol_set_parent", 00:04:54.951 "bdev_lvol_check_shallow_copy", 00:04:54.951 "bdev_lvol_start_shallow_copy", 00:04:54.951 "bdev_lvol_grow_lvstore", 00:04:54.951 "bdev_lvol_get_lvols", 00:04:54.951 "bdev_lvol_get_lvstores", 00:04:54.951 "bdev_lvol_delete", 00:04:54.951 "bdev_lvol_set_read_only", 
00:04:54.951 "bdev_lvol_resize", 00:04:54.951 "bdev_lvol_decouple_parent", 00:04:54.951 "bdev_lvol_inflate", 00:04:54.951 "bdev_lvol_rename", 00:04:54.951 "bdev_lvol_clone_bdev", 00:04:54.951 "bdev_lvol_clone", 00:04:54.951 "bdev_lvol_snapshot", 00:04:54.951 "bdev_lvol_create", 00:04:54.951 "bdev_lvol_delete_lvstore", 00:04:54.951 "bdev_lvol_rename_lvstore", 00:04:54.951 "bdev_lvol_create_lvstore", 00:04:54.951 "bdev_raid_set_options", 00:04:54.951 "bdev_raid_remove_base_bdev", 00:04:54.951 "bdev_raid_add_base_bdev", 00:04:54.951 "bdev_raid_delete", 00:04:54.951 "bdev_raid_create", 00:04:54.951 "bdev_raid_get_bdevs", 00:04:54.951 "bdev_error_inject_error", 00:04:54.951 "bdev_error_delete", 00:04:54.951 "bdev_error_create", 00:04:54.951 "bdev_split_delete", 00:04:54.951 "bdev_split_create", 00:04:54.951 "bdev_delay_delete", 00:04:54.951 "bdev_delay_create", 00:04:54.951 "bdev_delay_update_latency", 00:04:54.951 "bdev_zone_block_delete", 00:04:54.951 "bdev_zone_block_create", 00:04:54.951 "blobfs_create", 00:04:54.951 "blobfs_detect", 00:04:54.951 "blobfs_set_cache_size", 00:04:54.951 "bdev_aio_delete", 00:04:54.951 "bdev_aio_rescan", 00:04:54.951 "bdev_aio_create", 00:04:54.951 "bdev_ftl_set_property", 00:04:54.951 "bdev_ftl_get_properties", 00:04:54.951 "bdev_ftl_get_stats", 00:04:54.951 "bdev_ftl_unmap", 00:04:54.951 "bdev_ftl_unload", 00:04:54.951 "bdev_ftl_delete", 00:04:54.951 "bdev_ftl_load", 00:04:54.951 "bdev_ftl_create", 00:04:54.951 "bdev_virtio_attach_controller", 00:04:54.951 "bdev_virtio_scsi_get_devices", 00:04:54.951 "bdev_virtio_detach_controller", 00:04:54.951 "bdev_virtio_blk_set_hotplug", 00:04:54.951 "bdev_iscsi_delete", 00:04:54.951 "bdev_iscsi_create", 00:04:54.951 "bdev_iscsi_set_options", 00:04:54.951 "accel_error_inject_error", 00:04:54.951 "ioat_scan_accel_module", 00:04:54.951 "dsa_scan_accel_module", 00:04:54.951 "iaa_scan_accel_module", 00:04:54.951 "keyring_file_remove_key", 00:04:54.951 "keyring_file_add_key", 00:04:54.951 
"keyring_linux_set_options", 00:04:54.951 "fsdev_aio_delete", 00:04:54.951 "fsdev_aio_create", 00:04:54.951 "iscsi_get_histogram", 00:04:54.951 "iscsi_enable_histogram", 00:04:54.951 "iscsi_set_options", 00:04:54.951 "iscsi_get_auth_groups", 00:04:54.951 "iscsi_auth_group_remove_secret", 00:04:54.951 "iscsi_auth_group_add_secret", 00:04:54.951 "iscsi_delete_auth_group", 00:04:54.951 "iscsi_create_auth_group", 00:04:54.951 "iscsi_set_discovery_auth", 00:04:54.951 "iscsi_get_options", 00:04:54.951 "iscsi_target_node_request_logout", 00:04:54.951 "iscsi_target_node_set_redirect", 00:04:54.951 "iscsi_target_node_set_auth", 00:04:54.951 "iscsi_target_node_add_lun", 00:04:54.951 "iscsi_get_stats", 00:04:54.951 "iscsi_get_connections", 00:04:54.951 "iscsi_portal_group_set_auth", 00:04:54.951 "iscsi_start_portal_group", 00:04:54.951 "iscsi_delete_portal_group", 00:04:54.951 "iscsi_create_portal_group", 00:04:54.951 "iscsi_get_portal_groups", 00:04:54.951 "iscsi_delete_target_node", 00:04:54.951 "iscsi_target_node_remove_pg_ig_maps", 00:04:54.951 "iscsi_target_node_add_pg_ig_maps", 00:04:54.951 "iscsi_create_target_node", 00:04:54.951 "iscsi_get_target_nodes", 00:04:54.951 "iscsi_delete_initiator_group", 00:04:54.951 "iscsi_initiator_group_remove_initiators", 00:04:54.951 "iscsi_initiator_group_add_initiators", 00:04:54.951 "iscsi_create_initiator_group", 00:04:54.951 "iscsi_get_initiator_groups", 00:04:54.951 "nvmf_set_crdt", 00:04:54.951 "nvmf_set_config", 00:04:54.951 "nvmf_set_max_subsystems", 00:04:54.951 "nvmf_stop_mdns_prr", 00:04:54.951 "nvmf_publish_mdns_prr", 00:04:54.951 "nvmf_subsystem_get_listeners", 00:04:54.951 "nvmf_subsystem_get_qpairs", 00:04:54.951 "nvmf_subsystem_get_controllers", 00:04:54.951 "nvmf_get_stats", 00:04:54.951 "nvmf_get_transports", 00:04:54.951 "nvmf_create_transport", 00:04:54.951 "nvmf_get_targets", 00:04:54.951 "nvmf_delete_target", 00:04:54.951 "nvmf_create_target", 00:04:54.951 "nvmf_subsystem_allow_any_host", 00:04:54.951 
"nvmf_subsystem_set_keys", 00:04:54.951 "nvmf_subsystem_remove_host", 00:04:54.951 "nvmf_subsystem_add_host", 00:04:54.951 "nvmf_ns_remove_host", 00:04:54.951 "nvmf_ns_add_host", 00:04:54.951 "nvmf_subsystem_remove_ns", 00:04:54.952 "nvmf_subsystem_set_ns_ana_group", 00:04:54.952 "nvmf_subsystem_add_ns", 00:04:54.952 "nvmf_subsystem_listener_set_ana_state", 00:04:54.952 "nvmf_discovery_get_referrals", 00:04:54.952 "nvmf_discovery_remove_referral", 00:04:54.952 "nvmf_discovery_add_referral", 00:04:54.952 "nvmf_subsystem_remove_listener", 00:04:54.952 "nvmf_subsystem_add_listener", 00:04:54.952 "nvmf_delete_subsystem", 00:04:54.952 "nvmf_create_subsystem", 00:04:54.952 "nvmf_get_subsystems", 00:04:54.952 "env_dpdk_get_mem_stats", 00:04:54.952 "nbd_get_disks", 00:04:54.952 "nbd_stop_disk", 00:04:54.952 "nbd_start_disk", 00:04:54.952 "ublk_recover_disk", 00:04:54.952 "ublk_get_disks", 00:04:54.952 "ublk_stop_disk", 00:04:54.952 "ublk_start_disk", 00:04:54.952 "ublk_destroy_target", 00:04:54.952 "ublk_create_target", 00:04:54.952 "virtio_blk_create_transport", 00:04:54.952 "virtio_blk_get_transports", 00:04:54.952 "vhost_controller_set_coalescing", 00:04:54.952 "vhost_get_controllers", 00:04:54.952 "vhost_delete_controller", 00:04:54.952 "vhost_create_blk_controller", 00:04:54.952 "vhost_scsi_controller_remove_target", 00:04:54.952 "vhost_scsi_controller_add_target", 00:04:54.952 "vhost_start_scsi_controller", 00:04:54.952 "vhost_create_scsi_controller", 00:04:54.952 "thread_set_cpumask", 00:04:54.952 "scheduler_set_options", 00:04:54.952 "framework_get_governor", 00:04:54.952 "framework_get_scheduler", 00:04:54.952 "framework_set_scheduler", 00:04:54.952 "framework_get_reactors", 00:04:54.952 "thread_get_io_channels", 00:04:54.952 "thread_get_pollers", 00:04:54.952 "thread_get_stats", 00:04:54.952 "framework_monitor_context_switch", 00:04:54.952 "spdk_kill_instance", 00:04:54.952 "log_enable_timestamps", 00:04:54.952 "log_get_flags", 00:04:54.952 "log_clear_flag", 
00:04:54.952 "log_set_flag", 00:04:54.952 "log_get_level", 00:04:54.952 "log_set_level", 00:04:54.952 "log_get_print_level", 00:04:54.952 "log_set_print_level", 00:04:54.952 "framework_enable_cpumask_locks", 00:04:54.952 "framework_disable_cpumask_locks", 00:04:54.952 "framework_wait_init", 00:04:54.952 "framework_start_init", 00:04:54.952 "scsi_get_devices", 00:04:54.952 "bdev_get_histogram", 00:04:54.952 "bdev_enable_histogram", 00:04:54.952 "bdev_set_qos_limit", 00:04:54.952 "bdev_set_qd_sampling_period", 00:04:54.952 "bdev_get_bdevs", 00:04:54.952 "bdev_reset_iostat", 00:04:54.952 "bdev_get_iostat", 00:04:54.952 "bdev_examine", 00:04:54.952 "bdev_wait_for_examine", 00:04:54.952 "bdev_set_options", 00:04:54.952 "accel_get_stats", 00:04:54.952 "accel_set_options", 00:04:54.952 "accel_set_driver", 00:04:54.952 "accel_crypto_key_destroy", 00:04:54.952 "accel_crypto_keys_get", 00:04:54.952 "accel_crypto_key_create", 00:04:54.952 "accel_assign_opc", 00:04:54.952 "accel_get_module_info", 00:04:54.952 "accel_get_opc_assignments", 00:04:54.952 "vmd_rescan", 00:04:54.952 "vmd_remove_device", 00:04:54.952 "vmd_enable", 00:04:54.952 "sock_get_default_impl", 00:04:54.952 "sock_set_default_impl", 00:04:54.952 "sock_impl_set_options", 00:04:54.952 "sock_impl_get_options", 00:04:54.952 "iobuf_get_stats", 00:04:54.952 "iobuf_set_options", 00:04:54.952 "keyring_get_keys", 00:04:54.952 "framework_get_pci_devices", 00:04:54.952 "framework_get_config", 00:04:54.952 "framework_get_subsystems", 00:04:54.952 "fsdev_set_opts", 00:04:54.952 "fsdev_get_opts", 00:04:54.952 "trace_get_info", 00:04:54.952 "trace_get_tpoint_group_mask", 00:04:54.952 "trace_disable_tpoint_group", 00:04:54.952 "trace_enable_tpoint_group", 00:04:54.952 "trace_clear_tpoint_mask", 00:04:54.952 "trace_set_tpoint_mask", 00:04:54.952 "notify_get_notifications", 00:04:54.952 "notify_get_types", 00:04:54.952 "spdk_get_version", 00:04:54.952 "rpc_get_methods" 00:04:54.952 ] 00:04:54.952 09:42:19 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:54.952 09:42:19 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.952 09:42:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.952 09:42:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:54.952 09:42:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57890 00:04:54.952 09:42:20 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57890 ']' 00:04:54.952 09:42:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57890 00:04:54.952 09:42:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:54.952 09:42:20 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.952 09:42:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57890 00:04:54.952 killing process with pid 57890 00:04:54.952 09:42:20 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.952 09:42:20 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.952 09:42:20 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57890' 00:04:54.952 09:42:20 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57890 00:04:54.952 09:42:20 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57890 00:04:57.491 ************************************ 00:04:57.491 END TEST spdkcli_tcp 00:04:57.491 ************************************ 00:04:57.491 00:04:57.491 real 0m4.259s 00:04:57.491 user 0m7.640s 00:04:57.491 sys 0m0.637s 00:04:57.491 09:42:22 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.491 09:42:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.491 09:42:22 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.491 09:42:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.491 09:42:22 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.491 09:42:22 -- common/autotest_common.sh@10 -- # set +x 00:04:57.491 ************************************ 00:04:57.491 START TEST dpdk_mem_utility 00:04:57.491 ************************************ 00:04:57.491 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.491 * Looking for test storage... 00:04:57.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:57.491 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:57.491 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:57.491 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:57.491 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:57.491 
09:42:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.491 09:42:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.751 09:42:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:57.751 09:42:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:57.751 09:42:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.751 09:42:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:57.751 09:42:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.751 09:42:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:57.751 09:42:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:57.751 09:42:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.751 09:42:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:57.751 09:42:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.751 09:42:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.751 09:42:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.751 09:42:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:57.751 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.751 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:57.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.751 --rc genhtml_branch_coverage=1 00:04:57.751 --rc genhtml_function_coverage=1 00:04:57.751 --rc genhtml_legend=1 00:04:57.751 --rc geninfo_all_blocks=1 00:04:57.751 --rc geninfo_unexecuted_blocks=1 00:04:57.751 00:04:57.751 ' 00:04:57.751 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:57.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.751 --rc 
genhtml_branch_coverage=1 00:04:57.751 --rc genhtml_function_coverage=1 00:04:57.751 --rc genhtml_legend=1 00:04:57.751 --rc geninfo_all_blocks=1 00:04:57.751 --rc geninfo_unexecuted_blocks=1 00:04:57.751 00:04:57.751 ' 00:04:57.751 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:57.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.751 --rc genhtml_branch_coverage=1 00:04:57.751 --rc genhtml_function_coverage=1 00:04:57.751 --rc genhtml_legend=1 00:04:57.751 --rc geninfo_all_blocks=1 00:04:57.751 --rc geninfo_unexecuted_blocks=1 00:04:57.751 00:04:57.751 ' 00:04:57.751 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:57.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.751 --rc genhtml_branch_coverage=1 00:04:57.751 --rc genhtml_function_coverage=1 00:04:57.751 --rc genhtml_legend=1 00:04:57.751 --rc geninfo_all_blocks=1 00:04:57.751 --rc geninfo_unexecuted_blocks=1 00:04:57.751 00:04:57.751 ' 00:04:57.751 09:42:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:57.751 09:42:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58012 00:04:57.751 09:42:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.751 09:42:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58012 00:04:57.751 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58012 ']' 00:04:57.751 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.751 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.751 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:57.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.752 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.752 09:42:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.752 [2024-12-06 09:42:22.881941] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:04:57.752 [2024-12-06 09:42:22.882138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58012 ] 00:04:58.011 [2024-12-06 09:42:23.054477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.011 [2024-12-06 09:42:23.169116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.953 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.953 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:58.953 09:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:58.953 09:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:58.953 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.953 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.953 { 00:04:58.953 "filename": "/tmp/spdk_mem_dump.txt" 00:04:58.953 } 00:04:58.953 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.953 09:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:58.953 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:58.953 1 heaps totaling size 824.000000 MiB 00:04:58.953 size: 
824.000000 MiB heap id: 0 00:04:58.953 end heaps---------- 00:04:58.953 9 mempools totaling size 603.782043 MiB 00:04:58.953 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:58.953 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:58.953 size: 100.555481 MiB name: bdev_io_58012 00:04:58.953 size: 50.003479 MiB name: msgpool_58012 00:04:58.953 size: 36.509338 MiB name: fsdev_io_58012 00:04:58.953 size: 21.763794 MiB name: PDU_Pool 00:04:58.953 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:58.953 size: 4.133484 MiB name: evtpool_58012 00:04:58.953 size: 0.026123 MiB name: Session_Pool 00:04:58.953 end mempools------- 00:04:58.953 6 memzones totaling size 4.142822 MiB 00:04:58.953 size: 1.000366 MiB name: RG_ring_0_58012 00:04:58.953 size: 1.000366 MiB name: RG_ring_1_58012 00:04:58.953 size: 1.000366 MiB name: RG_ring_4_58012 00:04:58.953 size: 1.000366 MiB name: RG_ring_5_58012 00:04:58.953 size: 0.125366 MiB name: RG_ring_2_58012 00:04:58.953 size: 0.015991 MiB name: RG_ring_3_58012 00:04:58.953 end memzones------- 00:04:58.953 09:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:58.953 heap id: 0 total size: 824.000000 MiB number of busy elements: 320 number of free elements: 18 00:04:58.953 list of free elements. 
size: 16.780151 MiB 00:04:58.953 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:58.953 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:58.953 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:58.953 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:58.953 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:58.953 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:58.953 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:58.953 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:58.953 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:58.953 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:58.953 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:58.953 element at address: 0x20001b400000 with size: 0.561462 MiB 00:04:58.953 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:58.953 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:58.953 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:58.953 element at address: 0x200012c00000 with size: 0.433472 MiB 00:04:58.953 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:58.953 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:58.953 list of standard malloc elements. 
size: 199.288940 MiB
00:04:58.953 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:04:58.953 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:04:58.953 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:04:58.953 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:04:58.953 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:04:58.953 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:04:58.953 element at address: 0x200019deff40 with size: 0.062683 MiB
00:04:58.953 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:04:58.953 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:04:58.953 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:04:58.953 element at address: 0x200012bff040 with size: 0.000305 MiB
00:04:58.953 [several hundred further elements, each with size: 0.000244 MiB, spanning addresses 0x2000002d7b00 through 0x20002886fe80 in the 0x2000004xxx, 0x2000008xxx, 0x200000cxxx, 0x20000a5xxx, 0x200012bxxx/0x200012cxxx, 0x2000192xxx-0x2000196xxx, 0x200019axxx-0x200019exxx, 0x20001b4xxx and 0x2000288xxx ranges; repetitive entries condensed]
00:04:58.955 list of memzone associated elements.
size: 607.930908 MiB
00:04:58.955 element at address: 0x20001b4954c0 with size: 211.416809 MiB; associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:58.955 element at address: 0x20002886ff80 with size: 157.562622 MiB; associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:58.955 element at address: 0x200012df1e40 with size: 100.055115 MiB; associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58012_0
00:04:58.955 element at address: 0x200000dff340 with size: 48.003113 MiB; associated memzone info: size: 48.002930 MiB name: MP_msgpool_58012_0
00:04:58.955 element at address: 0x200003ffdb40 with size: 36.008972 MiB; associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58012_0
00:04:58.955 element at address: 0x200019fbe900 with size: 20.255615 MiB; associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:58.955 element at address: 0x2000327feb00 with size: 18.005127 MiB; associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:58.955 element at address: 0x2000004ffec0 with size: 3.000305 MiB; associated memzone info: size: 3.000122 MiB name: MP_evtpool_58012_0
00:04:58.955 element at address: 0x2000009ffdc0 with size: 2.000549 MiB; associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58012
00:04:58.955 element at address: 0x2000002d7c00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_evtpool_58012
00:04:58.955 element at address: 0x2000196fde00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:58.955 element at address: 0x200019ebc780 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:58.955 element at address: 0x2000192fde00 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:58.955 element at address: 0x200012cefcc0 with size: 1.008179 MiB; associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:58.956 element at address: 0x200000cff100 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_0_58012
00:04:58.956 element at address: 0x2000008ffb80 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_1_58012
00:04:58.956 element at address: 0x200019affd40 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_4_58012
00:04:58.956 element at address: 0x2000326fe8c0 with size: 1.000549 MiB; associated memzone info: size: 1.000366 MiB name: RG_ring_5_58012
00:04:58.956 element at address: 0x20000087f5c0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58012
00:04:58.956 element at address: 0x200000c7ecc0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58012
00:04:58.956 element at address: 0x20001967dac0 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:58.956 element at address: 0x200012c6f980 with size: 0.500549 MiB; associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:58.956 element at address: 0x200019e7c440 with size: 0.250549 MiB; associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:58.956 element at address: 0x2000002b78c0 with size: 0.125549 MiB; associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58012
00:04:58.956 element at address: 0x20000085df80 with size: 0.125549 MiB; associated memzone info: size: 0.125366 MiB name: RG_ring_2_58012
00:04:58.956 element at address: 0x2000192f5ac0 with size: 0.031799 MiB; associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:58.956 element at address: 0x200028864140 with size: 0.023804 MiB; associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:58.956 element at address: 0x200000859d40 with size: 0.016174 MiB; associated memzone info: size: 0.015991 MiB name: RG_ring_3_58012
00:04:58.956 element at address: 0x20002886a2c0 with size: 0.002502 MiB; associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:58.956 element at address: 0x2000004ffa40 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_msgpool_58012
00:04:58.956 element at address: 0x2000008ff900 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58012
00:04:58.956 element at address: 0x200012bffd80 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58012
00:04:58.956 element at address: 0x20002886ae00 with size: 0.000366 MiB; associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:58.956 09:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:58.956 09:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58012
00:04:58.956 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58012 ']'
00:04:58.956 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58012
00:04:58.956 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:04:58.956 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:58.956 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58012
00:04:58.956 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:58.956 09:42:24 dpdk_mem_utility --
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:58.956 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58012'
killing process with pid 58012
00:04:58.956 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58012
00:04:58.956 09:42:24 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58012
00:05:01.492
00:05:01.492 real 0m4.051s
00:05:01.492 user 0m3.978s
00:05:01.492 sys 0m0.560s
00:05:01.492 09:42:26 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:01.492 09:42:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:01.492 ************************************
00:05:01.492 END TEST dpdk_mem_utility
00:05:01.492 ************************************
00:05:01.492 09:42:26 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:01.492 09:42:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:01.492 09:42:26 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:01.492 09:42:26 -- common/autotest_common.sh@10 -- # set +x
00:05:01.492 ************************************
00:05:01.492 START TEST event
00:05:01.492 ************************************
00:05:01.492 09:42:26 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:01.753 * Looking for test storage...
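The trace above walks a killprocess-style teardown: confirm the pid argument is non-empty, probe the process with `kill -0`, inspect its command name on Linux, refuse to signal a `sudo` wrapper, then kill and reap it. A minimal sketch of that pattern, reconstructed from the trace (this is our simplified illustration, not the actual `common/autotest_common.sh` implementation):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern seen in the trace (illustrative only).
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                  # the '[' -z <pid> ']' guard
    kill -0 "$pid" 2>/dev/null || return 0     # kill -0 probes liveness, sends no signal
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name != sudo ]] || return 1  # never signal a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"                                # default SIGTERM, as in the trace
    wait "$pid" 2>/dev/null || true            # reap it if it was our child
}
```

`kill -0` is the key step: signal 0 performs the permission and existence checks without delivering anything, which is why the trace runs it before the real `kill`.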
00:05:01.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:01.753 09:42:26 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:01.753 09:42:26 event -- common/autotest_common.sh@1711 -- # lcov --version
00:05:01.753 09:42:26 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:01.753 09:42:26 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:01.753 09:42:26 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:01.753 09:42:26 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:01.753 09:42:26 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:01.753 09:42:26 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:01.753 09:42:26 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:01.753 09:42:26 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:01.753 09:42:26 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:01.753 09:42:26 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:01.753 09:42:26 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:01.753 09:42:26 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:01.753 09:42:26 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:01.753 09:42:26 event -- scripts/common.sh@344 -- # case "$op" in
00:05:01.753 09:42:26 event -- scripts/common.sh@345 -- # : 1
00:05:01.753 09:42:26 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:01.753 09:42:26 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:01.753 09:42:26 event -- scripts/common.sh@365 -- # decimal 1
00:05:01.753 09:42:26 event -- scripts/common.sh@353 -- # local d=1
00:05:01.753 09:42:26 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:01.753 09:42:26 event -- scripts/common.sh@355 -- # echo 1
00:05:01.753 09:42:26 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:01.753 09:42:26 event -- scripts/common.sh@366 -- # decimal 2
00:05:01.753 09:42:26 event -- scripts/common.sh@353 -- # local d=2
00:05:01.753 09:42:26 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:01.753 09:42:26 event -- scripts/common.sh@355 -- # echo 2
00:05:01.753 09:42:26 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:01.753 09:42:26 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:01.753 09:42:26 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:01.753 09:42:26 event -- scripts/common.sh@368 -- # return 0
00:05:01.753 09:42:26 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:01.753 09:42:26 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:01.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.753 --rc genhtml_branch_coverage=1
00:05:01.753 --rc genhtml_function_coverage=1
00:05:01.753 --rc genhtml_legend=1
00:05:01.753 --rc geninfo_all_blocks=1
00:05:01.753 --rc geninfo_unexecuted_blocks=1
00:05:01.753
00:05:01.753 '
00:05:01.753 09:42:26 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:01.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.753 --rc genhtml_branch_coverage=1
00:05:01.753 --rc genhtml_function_coverage=1
00:05:01.753 --rc genhtml_legend=1
00:05:01.753 --rc geninfo_all_blocks=1
00:05:01.753 --rc geninfo_unexecuted_blocks=1
00:05:01.753
00:05:01.753 '
00:05:01.753 09:42:26 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:01.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.753 --rc genhtml_branch_coverage=1
00:05:01.753 --rc genhtml_function_coverage=1
00:05:01.753 --rc genhtml_legend=1
00:05:01.753 --rc geninfo_all_blocks=1
00:05:01.753 --rc geninfo_unexecuted_blocks=1
00:05:01.753
00:05:01.753 '
00:05:01.753 09:42:26 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:01.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.753 --rc genhtml_branch_coverage=1
00:05:01.753 --rc genhtml_function_coverage=1
00:05:01.753 --rc genhtml_legend=1
00:05:01.753 --rc geninfo_all_blocks=1
00:05:01.753 --rc geninfo_unexecuted_blocks=1
00:05:01.753
00:05:01.753 '
00:05:01.753 09:42:26 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:01.753 09:42:26 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:01.753 09:42:26 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:01.753 09:42:26 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:01.753 09:42:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:01.753 09:42:26 event -- common/autotest_common.sh@10 -- # set +x
00:05:01.753 ************************************
00:05:01.753 START TEST event_perf
00:05:01.753 ************************************
00:05:01.753 09:42:26 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
Running I/O for 1 seconds...[2024-12-06 09:42:26.955163] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization...
00:05:01.753 [2024-12-06 09:42:26.955332] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58120 ] 00:05:02.013 [2024-12-06 09:42:27.126232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.013 [2024-12-06 09:42:27.247053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.013 [2024-12-06 09:42:27.247339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.013 [2024-12-06 09:42:27.247450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.013 [2024-12-06 09:42:27.247579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.395 Running I/O for 1 seconds... 00:05:03.395 lcore 0: 209116 00:05:03.395 lcore 1: 209116 00:05:03.395 lcore 2: 209115 00:05:03.395 lcore 3: 209116 00:05:03.395 done. 
00:05:03.395 00:05:03.395 real 0m1.577s 00:05:03.395 user 0m4.352s 00:05:03.395 sys 0m0.105s 00:05:03.395 ************************************ 00:05:03.395 END TEST event_perf 00:05:03.395 ************************************ 00:05:03.395 09:42:28 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.395 09:42:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.395 09:42:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:03.395 09:42:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:03.395 09:42:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.395 09:42:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.395 ************************************ 00:05:03.395 START TEST event_reactor 00:05:03.395 ************************************ 00:05:03.395 09:42:28 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:03.395 [2024-12-06 09:42:28.602517] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:05:03.395 [2024-12-06 09:42:28.602685] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58160 ] 00:05:03.655 [2024-12-06 09:42:28.775455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.655 [2024-12-06 09:42:28.886463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.079 test_start 00:05:05.079 oneshot 00:05:05.079 tick 100 00:05:05.079 tick 100 00:05:05.079 tick 250 00:05:05.079 tick 100 00:05:05.079 tick 100 00:05:05.079 tick 100 00:05:05.079 tick 250 00:05:05.079 tick 500 00:05:05.079 tick 100 00:05:05.079 tick 100 00:05:05.079 tick 250 00:05:05.079 tick 100 00:05:05.079 tick 100 00:05:05.079 test_end 00:05:05.079 00:05:05.079 real 0m1.583s 00:05:05.079 user 0m1.377s 00:05:05.079 sys 0m0.096s 00:05:05.079 09:42:30 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.079 09:42:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:05.079 ************************************ 00:05:05.079 END TEST event_reactor 00:05:05.079 ************************************ 00:05:05.079 09:42:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:05.079 09:42:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:05.079 09:42:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.079 09:42:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.079 ************************************ 00:05:05.079 START TEST event_reactor_perf 00:05:05.079 ************************************ 00:05:05.079 09:42:30 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:05.079 [2024-12-06 
09:42:30.246571] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:05:05.079 [2024-12-06 09:42:30.246685] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58196 ] 00:05:05.338 [2024-12-06 09:42:30.421175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.338 [2024-12-06 09:42:30.538694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.720 test_start 00:05:06.720 test_end 00:05:06.720 Performance: 381981 events per second 00:05:06.720 00:05:06.720 real 0m1.557s 00:05:06.720 user 0m1.352s 00:05:06.720 sys 0m0.096s 00:05:06.720 ************************************ 00:05:06.720 END TEST event_reactor_perf 00:05:06.720 ************************************ 00:05:06.720 09:42:31 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.720 09:42:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:06.720 09:42:31 event -- event/event.sh@49 -- # uname -s 00:05:06.720 09:42:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:06.720 09:42:31 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:06.720 09:42:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.720 09:42:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.720 09:42:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.720 ************************************ 00:05:06.720 START TEST event_scheduler 00:05:06.720 ************************************ 00:05:06.720 09:42:31 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:06.720 * Looking for test storage... 
00:05:06.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:06.720 09:42:31 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:06.720 09:42:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:06.720 09:42:31 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:06.980 09:42:32 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:06.980 09:42:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:06.981 09:42:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.981 09:42:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:06.981 09:42:32 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.981 09:42:32 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:06.981 09:42:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:06.981 09:42:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.981 09:42:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:06.981 09:42:32 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.981 09:42:32 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.981 09:42:32 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.981 09:42:32 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:06.981 09:42:32 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.981 09:42:32 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:06.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.981 --rc genhtml_branch_coverage=1 00:05:06.981 --rc genhtml_function_coverage=1 00:05:06.981 --rc genhtml_legend=1 00:05:06.981 --rc geninfo_all_blocks=1 00:05:06.981 --rc geninfo_unexecuted_blocks=1 00:05:06.981 00:05:06.981 ' 00:05:06.981 09:42:32 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:06.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.981 --rc genhtml_branch_coverage=1 00:05:06.981 --rc genhtml_function_coverage=1 00:05:06.981 --rc 
genhtml_legend=1 00:05:06.981 --rc geninfo_all_blocks=1 00:05:06.981 --rc geninfo_unexecuted_blocks=1 00:05:06.981 00:05:06.981 ' 00:05:06.981 09:42:32 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:06.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.981 --rc genhtml_branch_coverage=1 00:05:06.981 --rc genhtml_function_coverage=1 00:05:06.981 --rc genhtml_legend=1 00:05:06.981 --rc geninfo_all_blocks=1 00:05:06.981 --rc geninfo_unexecuted_blocks=1 00:05:06.981 00:05:06.981 ' 00:05:06.981 09:42:32 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:06.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.981 --rc genhtml_branch_coverage=1 00:05:06.981 --rc genhtml_function_coverage=1 00:05:06.981 --rc genhtml_legend=1 00:05:06.981 --rc geninfo_all_blocks=1 00:05:06.981 --rc geninfo_unexecuted_blocks=1 00:05:06.981 00:05:06.981 ' 00:05:06.981 09:42:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:06.981 09:42:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58267 00:05:06.981 09:42:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:06.981 09:42:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.981 09:42:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58267 00:05:06.981 09:42:32 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58267 ']' 00:05:06.981 09:42:32 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.981 09:42:32 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.981 09:42:32 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:06.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.981 09:42:32 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.981 09:42:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.981 [2024-12-06 09:42:32.126691] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:05:06.981 [2024-12-06 09:42:32.126867] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58267 ] 00:05:07.241 [2024-12-06 09:42:32.304087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:07.241 [2024-12-06 09:42:32.422561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.241 [2024-12-06 09:42:32.422737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.241 [2024-12-06 09:42:32.422880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:07.241 [2024-12-06 09:42:32.422901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.811 09:42:32 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.811 09:42:32 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:07.811 09:42:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:07.811 09:42:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.811 09:42:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:07.811 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.811 POWER: Cannot set governor of lcore 0 to userspace 00:05:07.811 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.811 POWER: Cannot set governor of lcore 0 to performance 00:05:07.811 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.811 POWER: Cannot set governor of lcore 0 to userspace 00:05:07.811 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:07.811 POWER: Cannot set governor of lcore 0 to userspace 00:05:07.811 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:07.811 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:07.811 POWER: Unable to set Power Management Environment for lcore 0 00:05:07.811 [2024-12-06 09:42:32.995729] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:07.811 [2024-12-06 09:42:32.995786] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:07.811 [2024-12-06 09:42:32.995830] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:07.811 [2024-12-06 09:42:32.995890] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:07.811 [2024-12-06 09:42:32.995930] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:07.811 [2024-12-06 09:42:32.995970] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:07.811 09:42:32 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.811 09:42:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:07.811 09:42:32 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.811 09:42:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.071 [2024-12-06 09:42:33.331484] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:08.071 09:42:33 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.071 09:42:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:08.071 09:42:33 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.071 09:42:33 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.071 09:42:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.330 ************************************ 00:05:08.330 START TEST scheduler_create_thread 00:05:08.330 ************************************ 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.330 2 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.330 3 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.330 4 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.330 5 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.330 6 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:08.330 7 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.330 8 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.330 9 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.330 10 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.330 09:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.710 09:42:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.710 09:42:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:09.710 09:42:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:09.710 09:42:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.710 09:42:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.648 09:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.648 09:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:10.648 09:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.648 09:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.215 09:42:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.215 09:42:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:11.215 09:42:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:11.215 09:42:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.215 09:42:36 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.153 ************************************ 00:05:12.153 END TEST scheduler_create_thread 00:05:12.153 ************************************ 00:05:12.153 09:42:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.153 00:05:12.153 real 0m3.884s 00:05:12.153 user 0m0.029s 00:05:12.153 sys 0m0.008s 00:05:12.153 09:42:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.153 09:42:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.153 09:42:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:12.153 09:42:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58267 00:05:12.153 09:42:37 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58267 ']' 00:05:12.153 09:42:37 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58267 00:05:12.153 09:42:37 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:12.153 09:42:37 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.153 09:42:37 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58267 00:05:12.153 killing process with pid 58267 00:05:12.153 09:42:37 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:12.153 09:42:37 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:12.153 09:42:37 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58267' 00:05:12.153 09:42:37 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58267 00:05:12.153 09:42:37 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58267 00:05:12.412 [2024-12-06 09:42:37.609271] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:13.795 00:05:13.795 real 0m6.953s 00:05:13.795 user 0m14.453s 00:05:13.795 sys 0m0.486s 00:05:13.795 09:42:38 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.795 ************************************ 00:05:13.795 END TEST event_scheduler 00:05:13.795 ************************************ 00:05:13.795 09:42:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.795 09:42:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:13.795 09:42:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:13.795 09:42:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.795 09:42:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.795 09:42:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.795 ************************************ 00:05:13.795 START TEST app_repeat 00:05:13.795 ************************************ 00:05:13.795 09:42:38 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:13.795 09:42:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.795 09:42:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.795 09:42:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:13.795 09:42:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.795 09:42:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:13.795 09:42:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:13.795 09:42:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:13.795 09:42:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58395 00:05:13.795 09:42:38 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:13.795 
09:42:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.795 09:42:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58395' 00:05:13.795 Process app_repeat pid: 58395 00:05:13.795 09:42:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.795 09:42:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:13.795 spdk_app_start Round 0 00:05:13.795 09:42:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58395 /var/tmp/spdk-nbd.sock 00:05:13.795 09:42:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58395 ']' 00:05:13.795 09:42:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.795 09:42:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.795 09:42:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.795 09:42:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.795 09:42:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.795 [2024-12-06 09:42:38.911513] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:05:13.795 [2024-12-06 09:42:38.911724] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58395 ] 00:05:14.055 [2024-12-06 09:42:39.087222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.055 [2024-12-06 09:42:39.241314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.055 [2024-12-06 09:42:39.241358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.624 09:42:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.624 09:42:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.624 09:42:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.884 Malloc0 00:05:14.884 09:42:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.144 Malloc1 00:05:15.404 09:42:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.404 09:42:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.404 09:42:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.404 09:42:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.404 09:42:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.404 09:42:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.404 09:42:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.404 09:42:40 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.404 09:42:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.404 09:42:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.404 09:42:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.405 09:42:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.405 09:42:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.405 09:42:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.405 09:42:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.405 09:42:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.405 /dev/nbd0 00:05:15.405 09:42:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.405 09:42:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.405 09:42:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.405 09:42:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.405 09:42:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.405 09:42:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.405 09:42:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.405 09:42:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.405 09:42:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.405 09:42:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.405 09:42:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.405 1+0 records in 00:05:15.405 1+0 
records out 00:05:15.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604953 s, 6.8 MB/s 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.664 09:42:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.664 09:42:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.664 09:42:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.664 /dev/nbd1 00:05:15.664 09:42:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.664 09:42:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.664 1+0 records in 00:05:15.664 1+0 records out 00:05:15.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416105 s, 9.8 MB/s 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.664 09:42:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.664 09:42:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.664 09:42:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.924 09:42:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.924 09:42:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.924 09:42:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.924 { 00:05:15.924 "nbd_device": "/dev/nbd0", 00:05:15.924 "bdev_name": "Malloc0" 00:05:15.924 }, 00:05:15.924 { 00:05:15.924 "nbd_device": "/dev/nbd1", 00:05:15.924 "bdev_name": "Malloc1" 00:05:15.924 } 00:05:15.924 ]' 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.924 { 00:05:15.924 "nbd_device": "/dev/nbd0", 00:05:15.924 "bdev_name": "Malloc0" 00:05:15.924 }, 00:05:15.924 { 00:05:15.924 "nbd_device": "/dev/nbd1", 00:05:15.924 "bdev_name": "Malloc1" 00:05:15.924 } 00:05:15.924 ]' 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.924 /dev/nbd1' 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.924 /dev/nbd1' 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.924 256+0 records in 00:05:15.924 256+0 records out 00:05:15.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00648969 s, 162 MB/s 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.924 09:42:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.185 256+0 records in 00:05:16.185 256+0 records out 00:05:16.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242261 s, 43.3 MB/s 00:05:16.185 09:42:41 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.185 256+0 records in 00:05:16.185 256+0 records out 00:05:16.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029675 s, 35.3 MB/s 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.185 09:42:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.445 09:42:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.445 09:42:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.445 09:42:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.445 09:42:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.445 09:42:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.445 09:42:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.445 09:42:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.445 09:42:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.445 09:42:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.445 09:42:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.445 09:42:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.704 09:42:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.704 09:42:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.704 09:42:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.704 09:42:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.704 09:42:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.704 09:42:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:16.704 09:42:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.704 09:42:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.704 09:42:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.704 09:42:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.704 09:42:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.704 09:42:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.704 09:42:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.963 09:42:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.963 09:42:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.963 09:42:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.963 09:42:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.963 09:42:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.963 09:42:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.963 09:42:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.963 09:42:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.963 09:42:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.963 09:42:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.223 09:42:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.604 [2024-12-06 09:42:43.531257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.604 [2024-12-06 09:42:43.646518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.604 [2024-12-06 09:42:43.646521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.604 
[2024-12-06 09:42:43.843760] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.604 [2024-12-06 09:42:43.843893] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.539 spdk_app_start Round 1 00:05:20.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.539 09:42:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.539 09:42:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:20.539 09:42:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58395 /var/tmp/spdk-nbd.sock 00:05:20.539 09:42:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58395 ']' 00:05:20.539 09:42:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.539 09:42:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.539 09:42:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:20.539 09:42:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.539 09:42:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.539 09:42:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.539 09:42:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.539 09:42:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.798 Malloc0 00:05:20.798 09:42:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.058 Malloc1 00:05:21.058 09:42:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.058 09:42:46 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.058 09:42:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.317 /dev/nbd0 00:05:21.317 09:42:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.317 09:42:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.317 09:42:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:21.317 09:42:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.317 09:42:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.317 09:42:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.317 09:42:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:21.317 09:42:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.317 09:42:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.317 09:42:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.317 09:42:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.317 1+0 records in 00:05:21.317 1+0 records out 00:05:21.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251918 s, 16.3 MB/s 00:05:21.318 09:42:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.318 09:42:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.318 09:42:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.318 
09:42:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.318 09:42:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.318 09:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.318 09:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.318 09:42:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.318 /dev/nbd1 00:05:21.577 09:42:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.577 09:42:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.577 09:42:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:21.577 09:42:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.577 09:42:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.577 09:42:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.577 09:42:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:21.577 09:42:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.577 09:42:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.577 09:42:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.577 09:42:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.577 1+0 records in 00:05:21.577 1+0 records out 00:05:21.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370667 s, 11.1 MB/s 00:05:21.577 09:42:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.577 09:42:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.577 09:42:46 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.577 09:42:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.577 09:42:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.577 09:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.577 09:42:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.577 09:42:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.577 09:42:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.577 09:42:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.577 09:42:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:21.577 { 00:05:21.577 "nbd_device": "/dev/nbd0", 00:05:21.577 "bdev_name": "Malloc0" 00:05:21.577 }, 00:05:21.577 { 00:05:21.577 "nbd_device": "/dev/nbd1", 00:05:21.577 "bdev_name": "Malloc1" 00:05:21.577 } 00:05:21.577 ]' 00:05:21.577 09:42:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:21.577 { 00:05:21.577 "nbd_device": "/dev/nbd0", 00:05:21.577 "bdev_name": "Malloc0" 00:05:21.577 }, 00:05:21.577 { 00:05:21.577 "nbd_device": "/dev/nbd1", 00:05:21.577 "bdev_name": "Malloc1" 00:05:21.577 } 00:05:21.577 ]' 00:05:21.577 09:42:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:21.837 /dev/nbd1' 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:21.837 /dev/nbd1' 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:21.837 
09:42:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:21.837 256+0 records in 00:05:21.837 256+0 records out 00:05:21.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122178 s, 85.8 MB/s 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.837 09:42:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.837 256+0 records in 00:05:21.837 256+0 records out 00:05:21.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215743 s, 48.6 MB/s 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.838 256+0 records in 00:05:21.838 256+0 records out 00:05:21.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263731 s, 39.8 MB/s 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.838 09:42:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.097 09:42:47 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.097 09:42:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.097 09:42:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.097 09:42:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.097 09:42:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.097 09:42:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.097 09:42:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.097 09:42:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.097 09:42:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.097 09:42:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.408 09:42:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.408 09:42:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.408 09:42:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.408 09:42:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.408 09:42:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.408 09:42:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.408 09:42:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.408 09:42:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.408 09:42:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.408 09:42:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.408 09:42:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.408 09:42:47 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.408 09:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.408 09:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.769 09:42:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.769 09:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.769 09:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.769 09:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.769 09:42:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.769 09:42:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.769 09:42:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.769 09:42:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.769 09:42:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.769 09:42:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.028 09:42:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.408 [2024-12-06 09:42:49.244700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.408 [2024-12-06 09:42:49.358100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.408 [2024-12-06 09:42:49.358131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.408 [2024-12-06 09:42:49.548114] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.408 [2024-12-06 09:42:49.548223] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.317 spdk_app_start Round 2 00:05:26.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:26.317 09:42:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.317 09:42:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:26.317 09:42:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58395 /var/tmp/spdk-nbd.sock 00:05:26.317 09:42:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58395 ']' 00:05:26.317 09:42:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.317 09:42:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.317 09:42:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.317 09:42:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.317 09:42:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.317 09:42:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.317 09:42:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:26.317 09:42:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.317 Malloc0 00:05:26.317 09:42:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.577 Malloc1 00:05:26.838 09:42:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.838 09:42:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.838 /dev/nbd0 00:05:26.838 09:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.838 09:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.838 1+0 records in 00:05:26.838 1+0 records out 00:05:26.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423187 s, 9.7 MB/s 00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.838 09:42:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.838 09:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.838 09:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.838 09:42:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.098 /dev/nbd1 00:05:27.098 09:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.098 09:42:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.098 09:42:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:27.098 09:42:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.098 09:42:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.098 09:42:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.098 09:42:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:27.098 09:42:52 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:27.098 09:42:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.098 09:42:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.098 09:42:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.098 1+0 records in 00:05:27.098 1+0 records out 00:05:27.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387127 s, 10.6 MB/s 00:05:27.098 09:42:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.098 09:42:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.098 09:42:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.098 09:42:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.098 09:42:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.098 09:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.098 09:42:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.098 09:42:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.098 09:42:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.098 09:42:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.358 { 00:05:27.358 "nbd_device": "/dev/nbd0", 00:05:27.358 "bdev_name": "Malloc0" 00:05:27.358 }, 00:05:27.358 { 00:05:27.358 "nbd_device": "/dev/nbd1", 00:05:27.358 "bdev_name": "Malloc1" 00:05:27.358 } 00:05:27.358 ]' 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.358 { 
00:05:27.358 "nbd_device": "/dev/nbd0", 00:05:27.358 "bdev_name": "Malloc0" 00:05:27.358 }, 00:05:27.358 { 00:05:27.358 "nbd_device": "/dev/nbd1", 00:05:27.358 "bdev_name": "Malloc1" 00:05:27.358 } 00:05:27.358 ]' 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.358 /dev/nbd1' 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.358 /dev/nbd1' 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.358 09:42:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.617 256+0 records in 00:05:27.617 256+0 records out 00:05:27.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150059 s, 69.9 MB/s 00:05:27.617 09:42:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.617 09:42:52 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.617 256+0 records in 00:05:27.617 256+0 records out 00:05:27.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237511 s, 44.1 MB/s 00:05:27.617 09:42:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.617 09:42:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.617 256+0 records in 00:05:27.617 256+0 records out 00:05:27.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260948 s, 40.2 MB/s 00:05:27.617 09:42:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.617 09:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.617 09:42:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.617 09:42:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.617 09:42:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:27.617 09:42:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.617 09:42:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.618 09:42:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.618 09:42:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.618 09:42:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.618 09:42:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.618 09:42:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
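The `nbd_dd_data_verify` write/verify cycle traced above can be sketched standalone. The paths here are stand-ins (temp files via `mktemp` instead of the repo's `nbdrandtest` file and a plain file instead of `/dev/nbd0`, so no `oflag=direct` or running SPDK target is needed):

```shell
# Write phase: fill a 1 MiB pattern file (4096 * 256 bytes) and copy it
# to the stand-in "device", mirroring the two dd invocations in the trace.
tmp_file=$(mktemp)   # stands in for test/event/nbdrandtest
nbd_dev=$(mktemp)    # stands in for /dev/nbd0 (a real run adds oflag=direct)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
dd if="$tmp_file" of="$nbd_dev" bs=4096 count=256 2>/dev/null

# Verify phase: byte-compare the first 1 MiB, as `cmp -b -n 1M` does above
# (a plain byte count is used with -n here for portability).
cmp -n 1048576 "$tmp_file" "$nbd_dev"
verify_rc=$?

# Cleanup, matching the trailing `rm .../nbdrandtest` in the trace.
rm -f "$tmp_file" "$nbd_dev"
```

Writing through the device and comparing against the original file is what ties the bdev layer to the nbd export: a mismatch at any offset makes `cmp` exit nonzero and fails the test.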
00:05:27.618 09:42:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.618 09:42:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.618 09:42:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.618 09:42:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.618 09:42:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:27.618 09:42:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.618 09:42:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.877 09:42:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.878 09:42:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.878 09:42:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.878 09:42:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.878 09:42:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.878 09:42:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.878 09:42:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.878 09:42:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.878 09:42:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.878 09:42:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.138 09:42:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.138 09:42:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.138 09:42:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.138 09:42:53 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.138 09:42:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.138 09:42:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.138 09:42:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.138 09:42:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.138 09:42:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.138 09:42:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.138 09:42:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.138 09:42:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.138 09:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.138 09:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.397 09:42:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.397 09:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.397 09:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.397 09:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.397 09:42:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.397 09:42:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.397 09:42:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.397 09:42:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.397 09:42:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.398 09:42:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.657 09:42:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.038 
[2024-12-06 09:42:55.005809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.038 [2024-12-06 09:42:55.119674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.038 [2024-12-06 09:42:55.119675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.296 [2024-12-06 09:42:55.312559] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.296 [2024-12-06 09:42:55.312625] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.674 09:42:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58395 /var/tmp/spdk-nbd.sock 00:05:31.674 09:42:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58395 ']' 00:05:31.674 09:42:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.674 09:42:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.674 09:42:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
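`waitforlisten`, traced at several points above, is essentially a bounded poll: it retries until the target pid is up and listening on the given UNIX socket, or `max_retries` is exhausted. A minimal sketch of that loop under stated assumptions — `check_listen` is a hypothetical stand-in for the real socket probe (here it succeeds on the third attempt so the loop's behavior is observable):

```shell
# Stand-in probe: the real helper inspects the process and the UNIX
# socket at $rpc_addr; this fake one "succeeds" on attempt 3.
attempts=0
check_listen() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

rpc_addr=/var/tmp/spdk-nbd.sock   # socket path used throughout the trace
max_retries=100                   # matches `local max_retries=100` above
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."

i=0
rc=1
while [ "$i" -lt "$max_retries" ]; do
  if check_listen; then rc=0; break; fi
  i=$((i + 1))
  # a real run sleeps briefly between probes; omitted in this sketch
done
```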
00:05:31.674 09:42:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.674 09:42:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.933 09:42:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.933 09:42:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:31.933 09:42:57 event.app_repeat -- event/event.sh@39 -- # killprocess 58395 00:05:31.933 09:42:57 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58395 ']' 00:05:31.933 09:42:57 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58395 00:05:31.933 09:42:57 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:31.933 09:42:57 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.933 09:42:57 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58395 00:05:31.933 killing process with pid 58395 00:05:31.933 09:42:57 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.933 09:42:57 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.933 09:42:57 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58395' 00:05:31.933 09:42:57 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58395 00:05:31.933 09:42:57 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58395 00:05:32.868 spdk_app_start is called in Round 0. 00:05:32.868 Shutdown signal received, stop current app iteration 00:05:32.868 Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 reinitialization... 00:05:32.868 spdk_app_start is called in Round 1. 00:05:32.868 Shutdown signal received, stop current app iteration 00:05:32.868 Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 reinitialization... 00:05:32.868 spdk_app_start is called in Round 2. 
00:05:32.868 Shutdown signal received, stop current app iteration 00:05:32.868 Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 reinitialization... 00:05:32.868 spdk_app_start is called in Round 3. 00:05:32.868 Shutdown signal received, stop current app iteration 00:05:32.868 09:42:58 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:32.868 09:42:58 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:32.868 00:05:32.868 real 0m19.293s 00:05:32.868 user 0m41.246s 00:05:32.868 sys 0m2.801s 00:05:32.868 09:42:58 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.868 09:42:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.127 ************************************ 00:05:33.127 END TEST app_repeat 00:05:33.127 ************************************ 00:05:33.127 09:42:58 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:33.127 09:42:58 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:33.127 09:42:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.127 09:42:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.127 09:42:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.127 ************************************ 00:05:33.127 START TEST cpu_locks 00:05:33.127 ************************************ 00:05:33.127 09:42:58 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:33.127 * Looking for test storage... 
00:05:33.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:33.127 09:42:58 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.127 09:42:58 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.127 09:42:58 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.387 09:42:58 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.387 09:42:58 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.387 09:42:58 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.387 09:42:58 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.387 09:42:58 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.387 09:42:58 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.387 09:42:58 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.388 09:42:58 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:33.388 09:42:58 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.388 09:42:58 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.388 --rc genhtml_branch_coverage=1 00:05:33.388 --rc genhtml_function_coverage=1 00:05:33.388 --rc genhtml_legend=1 00:05:33.388 --rc geninfo_all_blocks=1 00:05:33.388 --rc geninfo_unexecuted_blocks=1 00:05:33.388 00:05:33.388 ' 00:05:33.388 09:42:58 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.388 --rc genhtml_branch_coverage=1 00:05:33.388 --rc genhtml_function_coverage=1 00:05:33.388 --rc genhtml_legend=1 00:05:33.388 --rc geninfo_all_blocks=1 00:05:33.388 --rc geninfo_unexecuted_blocks=1 
00:05:33.388 00:05:33.388 ' 00:05:33.388 09:42:58 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.388 --rc genhtml_branch_coverage=1 00:05:33.388 --rc genhtml_function_coverage=1 00:05:33.388 --rc genhtml_legend=1 00:05:33.388 --rc geninfo_all_blocks=1 00:05:33.388 --rc geninfo_unexecuted_blocks=1 00:05:33.388 00:05:33.388 ' 00:05:33.388 09:42:58 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.388 --rc genhtml_branch_coverage=1 00:05:33.388 --rc genhtml_function_coverage=1 00:05:33.388 --rc genhtml_legend=1 00:05:33.388 --rc geninfo_all_blocks=1 00:05:33.388 --rc geninfo_unexecuted_blocks=1 00:05:33.388 00:05:33.388 ' 00:05:33.388 09:42:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:33.388 09:42:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:33.388 09:42:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:33.388 09:42:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:33.388 09:42:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.388 09:42:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.388 09:42:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.388 ************************************ 00:05:33.388 START TEST default_locks 00:05:33.388 ************************************ 00:05:33.388 09:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:33.388 09:42:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58841 00:05:33.388 09:42:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.388 
09:42:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58841 00:05:33.388 09:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58841 ']' 00:05:33.388 09:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.388 09:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.388 09:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.388 09:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.388 09:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.388 [2024-12-06 09:42:58.545373] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:05:33.388 [2024-12-06 09:42:58.545579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58841 ] 00:05:33.647 [2024-12-06 09:42:58.717339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.647 [2024-12-06 09:42:58.825830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.586 09:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.586 09:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:34.586 09:42:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58841 00:05:34.586 09:42:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58841 00:05:34.586 09:42:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.845 09:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58841 00:05:34.845 09:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58841 ']' 00:05:34.845 09:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58841 00:05:34.845 09:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:34.845 09:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.845 09:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58841 00:05:35.104 killing process with pid 58841 00:05:35.104 09:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.104 09:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.104 09:43:00 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58841' 00:05:35.104 09:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58841 00:05:35.104 09:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58841 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58841 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58841 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58841 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58841 ']' 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.646 ERROR: process (pid: 58841) is no longer running 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
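`killprocess`, traced just above, guards the kill with two checks: `kill -0` confirms the pid still exists, and `ps --no-headers -o comm=` confirms it is the expected process (and not, say, `sudo`) before signalling and reaping it. A sketch against a throwaway `sleep` process; the `reactor_0` comm comparison from the trace is replaced by a plain `sleep` comparison, since no SPDK reactor is running here (Linux `procps` assumed for the `ps` flags):

```shell
# Start a throwaway background process to act as the target
# (stand-in for the spdk_tgt reactor process in the log).
sleep 30 &
pid=$!

killed=0
if kill -0 "$pid" 2>/dev/null; then                   # does the pid exist?
  process_name=$(ps --no-headers -o comm= -p "$pid")  # what is it running?
  if [ "$process_name" = "sleep" ]; then              # trace compares reactor_0
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                           # reap, as `wait 58841` does
    killed=1
  fi
fi
```

The `wait` at the end is what makes the later negative test (`NOT waitforlisten 58841`) deterministic: once the pid is reaped, `kill -0` fails and the helper reports "No such process" rather than racing a dying target.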
00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.646 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58841) - No such process 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:37.646 00:05:37.646 real 0m4.045s 00:05:37.646 user 0m3.998s 00:05:37.646 sys 0m0.637s 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.646 09:43:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.646 ************************************ 00:05:37.646 END TEST default_locks 00:05:37.646 ************************************ 00:05:37.646 09:43:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:37.646 09:43:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.646 09:43:02 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.646 09:43:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.646 ************************************ 00:05:37.646 START TEST default_locks_via_rpc 00:05:37.646 ************************************ 00:05:37.646 09:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:37.646 09:43:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58912 00:05:37.646 09:43:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.646 09:43:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58912 00:05:37.646 09:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58912 ']' 00:05:37.646 09:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.646 09:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.646 09:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.646 09:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.646 09:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.646 [2024-12-06 09:43:02.661983] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:05:37.646 [2024-12-06 09:43:02.662160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58912 ] 00:05:37.646 [2024-12-06 09:43:02.836193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.908 [2024-12-06 09:43:02.949915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.843 09:43:03 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58912 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58912 00:05:38.843 09:43:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.843 09:43:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58912 00:05:38.843 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58912 ']' 00:05:38.843 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58912 00:05:38.843 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:38.843 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.843 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58912 00:05:39.101 killing process with pid 58912 00:05:39.101 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.101 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.101 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58912' 00:05:39.101 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58912 00:05:39.101 09:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58912 00:05:41.636 00:05:41.636 real 0m3.918s 00:05:41.636 user 0m3.847s 00:05:41.636 sys 0m0.586s 00:05:41.636 09:43:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.636 09:43:06 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.636 ************************************ 00:05:41.636 END TEST default_locks_via_rpc 00:05:41.636 ************************************ 00:05:41.636 09:43:06 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:41.636 09:43:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.636 09:43:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.636 09:43:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.636 ************************************ 00:05:41.636 START TEST non_locking_app_on_locked_coremask 00:05:41.636 ************************************ 00:05:41.636 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:41.636 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58986 00:05:41.636 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.636 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58986 /var/tmp/spdk.sock 00:05:41.636 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58986 ']' 00:05:41.636 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.636 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.636 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:41.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.636 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.636 09:43:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.636 [2024-12-06 09:43:06.648698] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:05:41.636 [2024-12-06 09:43:06.648960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58986 ] 00:05:41.636 [2024-12-06 09:43:06.806564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.894 [2024-12-06 09:43:06.920191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.834 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.834 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:42.834 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:42.834 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59002 00:05:42.834 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59002 /var/tmp/spdk2.sock 00:05:42.834 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59002 ']' 00:05:42.834 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.834 09:43:07 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.834 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.834 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.834 09:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.834 [2024-12-06 09:43:07.850574] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:05:42.834 [2024-12-06 09:43:07.851097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59002 ] 00:05:42.834 [2024-12-06 09:43:08.021163] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:42.834 [2024-12-06 09:43:08.021244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.094 [2024-12-06 09:43:08.254289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.633 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.633 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:45.633 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58986 00:05:45.633 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58986 00:05:45.633 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.633 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58986 00:05:45.633 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58986 ']' 00:05:45.633 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58986 00:05:45.633 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.633 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.892 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58986 00:05:45.892 killing process with pid 58986 00:05:45.892 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.892 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.892 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58986' 00:05:45.892 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58986 00:05:45.892 09:43:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58986 00:05:51.170 09:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59002 00:05:51.170 09:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59002 ']' 00:05:51.170 09:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59002 00:05:51.170 09:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.170 09:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.170 09:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59002 00:05:51.170 killing process with pid 59002 00:05:51.170 09:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.170 09:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.170 09:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59002' 00:05:51.170 09:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59002 00:05:51.170 09:43:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59002 00:05:53.077 00:05:53.077 real 0m11.458s 00:05:53.077 user 0m11.623s 00:05:53.077 sys 0m1.291s 00:05:53.077 09:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:53.077 ************************************ 00:05:53.077 END TEST non_locking_app_on_locked_coremask 00:05:53.077 ************************************ 00:05:53.077 09:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.077 09:43:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:53.077 09:43:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.077 09:43:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.077 09:43:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.077 ************************************ 00:05:53.077 START TEST locking_app_on_unlocked_coremask 00:05:53.077 ************************************ 00:05:53.077 09:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:53.077 09:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59152 00:05:53.077 09:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:53.077 09:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59152 /var/tmp/spdk.sock 00:05:53.077 09:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59152 ']' 00:05:53.077 09:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.077 09:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.077 09:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.077 09:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.077 09:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.077 [2024-12-06 09:43:18.174986] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:05:53.077 [2024-12-06 09:43:18.175105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59152 ] 00:05:53.336 [2024-12-06 09:43:18.348039] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:53.336 [2024-12-06 09:43:18.348120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.336 [2024-12-06 09:43:18.459011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.273 09:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.273 09:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:54.273 09:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59168 00:05:54.273 09:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:54.273 09:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59168 /var/tmp/spdk2.sock 00:05:54.273 09:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59168 ']' 00:05:54.273 09:43:19 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.273 09:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.273 09:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.273 09:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.273 09:43:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.273 [2024-12-06 09:43:19.390323] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:05:54.273 [2024-12-06 09:43:19.390551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ] 00:05:54.534 [2024-12-06 09:43:19.560269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.534 [2024-12-06 09:43:19.797413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.077 09:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.077 09:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.077 09:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59168 00:05:57.077 09:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59168 00:05:57.077 09:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.336 09:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59152 00:05:57.336 09:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59152 ']' 00:05:57.336 09:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59152 00:05:57.336 09:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:57.336 09:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.336 09:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59152 00:05:57.336 killing process with pid 59152 00:05:57.336 09:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.336 09:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.336 09:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59152' 00:05:57.336 09:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59152 00:05:57.336 09:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59152 00:06:02.603 09:43:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59168 00:06:02.603 09:43:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59168 ']' 00:06:02.603 09:43:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59168 00:06:02.603 09:43:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:02.603 
09:43:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.603 09:43:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59168 00:06:02.603 killing process with pid 59168 00:06:02.603 09:43:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.603 09:43:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.603 09:43:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59168' 00:06:02.603 09:43:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59168 00:06:02.603 09:43:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59168 00:06:04.552 00:06:04.552 real 0m11.424s 00:06:04.552 user 0m11.581s 00:06:04.552 sys 0m1.279s 00:06:04.552 ************************************ 00:06:04.552 END TEST locking_app_on_unlocked_coremask 00:06:04.552 ************************************ 00:06:04.552 09:43:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.552 09:43:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.552 09:43:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:04.552 09:43:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.552 09:43:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.552 09:43:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.552 ************************************ 00:06:04.552 START TEST locking_app_on_locked_coremask 00:06:04.552 
************************************ 00:06:04.552 09:43:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:04.552 09:43:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59319 00:06:04.552 09:43:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.552 09:43:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59319 /var/tmp/spdk.sock 00:06:04.552 09:43:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59319 ']' 00:06:04.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.552 09:43:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.552 09:43:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.552 09:43:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.552 09:43:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.552 09:43:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.552 [2024-12-06 09:43:29.671107] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:06:04.552 [2024-12-06 09:43:29.671262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59319 ] 00:06:04.812 [2024-12-06 09:43:29.839781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.812 [2024-12-06 09:43:29.951442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59335 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59335 /var/tmp/spdk2.sock 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59335 /var/tmp/spdk2.sock 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59335 /var/tmp/spdk2.sock 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59335 ']' 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.753 09:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.753 [2024-12-06 09:43:30.849818] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:05.753 [2024-12-06 09:43:30.850025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59335 ] 00:06:05.753 [2024-12-06 09:43:31.021315] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59319 has claimed it. 00:06:05.753 [2024-12-06 09:43:31.021427] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:06.322 ERROR: process (pid: 59335) is no longer running 00:06:06.322 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59335) - No such process 00:06:06.322 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.322 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:06.322 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:06.322 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.322 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.322 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.322 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59319 00:06:06.322 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59319 00:06:06.322 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.583 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59319 00:06:06.583 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59319 ']' 00:06:06.583 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59319 00:06:06.583 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:06.583 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.583 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59319 00:06:06.583 
killing process with pid 59319 00:06:06.583 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.583 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.583 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59319' 00:06:06.583 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59319 00:06:06.583 09:43:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59319 00:06:09.120 ************************************ 00:06:09.120 END TEST locking_app_on_locked_coremask 00:06:09.120 ************************************ 00:06:09.120 00:06:09.120 real 0m4.524s 00:06:09.120 user 0m4.630s 00:06:09.120 sys 0m0.689s 00:06:09.120 09:43:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.120 09:43:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.120 09:43:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:09.120 09:43:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.120 09:43:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.120 09:43:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.120 ************************************ 00:06:09.120 START TEST locking_overlapped_coremask 00:06:09.120 ************************************ 00:06:09.120 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:09.120 09:43:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59405 00:06:09.120 09:43:34 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:09.120 09:43:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59405 /var/tmp/spdk.sock 00:06:09.120 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59405 ']' 00:06:09.120 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.120 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.120 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.120 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.120 09:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.120 [2024-12-06 09:43:34.251642] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:06:09.120 [2024-12-06 09:43:34.251762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59405 ] 00:06:09.379 [2024-12-06 09:43:34.426136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.379 [2024-12-06 09:43:34.550332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.379 [2024-12-06 09:43:34.550470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.379 [2024-12-06 09:43:34.550521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59423 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59423 /var/tmp/spdk2.sock 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59423 /var/tmp/spdk2.sock 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59423 /var/tmp/spdk2.sock 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59423 ']' 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.335 09:43:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.335 [2024-12-06 09:43:35.524884] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:10.335 [2024-12-06 09:43:35.525076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59423 ] 00:06:10.593 [2024-12-06 09:43:35.694935] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59405 has claimed it. 00:06:10.593 [2024-12-06 09:43:35.694995] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:11.211 ERROR: process (pid: 59423) is no longer running 00:06:11.211 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59423) - No such process 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59405 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59405 ']' 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59405 00:06:11.211 09:43:36 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.211 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59405 00:06:11.212 killing process with pid 59405 00:06:11.212 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.212 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.212 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59405' 00:06:11.212 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59405 00:06:11.212 09:43:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59405 00:06:13.751 00:06:13.751 real 0m4.522s 00:06:13.751 user 0m12.296s 00:06:13.751 sys 0m0.578s 00:06:13.751 ************************************ 00:06:13.751 END TEST locking_overlapped_coremask 00:06:13.751 ************************************ 00:06:13.752 09:43:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.752 09:43:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.752 09:43:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:13.752 09:43:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.752 09:43:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.752 09:43:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.752 ************************************ 00:06:13.752 START TEST 
locking_overlapped_coremask_via_rpc 00:06:13.752 ************************************ 00:06:13.752 09:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:13.752 09:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:13.752 09:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59493 00:06:13.752 09:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59493 /var/tmp/spdk.sock 00:06:13.752 09:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59493 ']' 00:06:13.752 09:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.752 09:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.752 09:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.752 09:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.752 09:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.752 [2024-12-06 09:43:38.850558] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:06:13.752 [2024-12-06 09:43:38.850816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59493 ] 00:06:14.011 [2024-12-06 09:43:39.026152] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:14.011 [2024-12-06 09:43:39.026340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.011 [2024-12-06 09:43:39.149973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.011 [2024-12-06 09:43:39.150185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.011 [2024-12-06 09:43:39.150240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.951 09:43:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.951 09:43:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:14.951 09:43:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59512 00:06:14.951 09:43:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59512 /var/tmp/spdk2.sock 00:06:14.951 09:43:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59512 ']' 00:06:14.951 09:43:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.951 09:43:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:14.951 09:43:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.951 09:43:40 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.951 09:43:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.951 09:43:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.951 [2024-12-06 09:43:40.140292] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:14.951 [2024-12-06 09:43:40.140847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59512 ] 00:06:15.210 [2024-12-06 09:43:40.318300] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.210 [2024-12-06 09:43:40.318365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.470 [2024-12-06 09:43:40.574340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.470 [2024-12-06 09:43:40.574464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.470 [2024-12-06 09:43:40.574501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.015 09:43:42 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.015 [2024-12-06 09:43:42.759374] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59493 has claimed it. 00:06:18.015 request: 00:06:18.015 { 00:06:18.015 "method": "framework_enable_cpumask_locks", 00:06:18.015 "req_id": 1 00:06:18.015 } 00:06:18.015 Got JSON-RPC error response 00:06:18.015 response: 00:06:18.015 { 00:06:18.015 "code": -32603, 00:06:18.015 "message": "Failed to claim CPU core: 2" 00:06:18.015 } 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59493 /var/tmp/spdk.sock 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59493 ']' 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59512 /var/tmp/spdk2.sock 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59512 ']' 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.015 09:43:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.015 09:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.015 09:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.015 09:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:18.015 09:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.015 09:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.015 09:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.015 00:06:18.015 real 0m4.466s 00:06:18.015 user 0m1.318s 00:06:18.015 sys 0m0.203s 00:06:18.015 09:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.015 09:43:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.015 ************************************ 00:06:18.015 END TEST locking_overlapped_coremask_via_rpc 00:06:18.015 ************************************ 00:06:18.015 09:43:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:18.015 09:43:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59493 ]] 00:06:18.015 09:43:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59493 00:06:18.015 09:43:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59493 ']' 00:06:18.015 09:43:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59493 00:06:18.015 09:43:43 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:18.015 09:43:43 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.015 09:43:43 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59493 00:06:18.274 09:43:43 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.274 09:43:43 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.274 09:43:43 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59493' 00:06:18.274 killing process with pid 59493 00:06:18.274 09:43:43 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59493 00:06:18.274 09:43:43 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59493 00:06:20.818 09:43:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59512 ]] 00:06:20.818 09:43:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59512 00:06:20.818 09:43:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59512 ']' 00:06:20.818 09:43:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59512 00:06:20.818 09:43:45 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:20.818 09:43:45 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.818 09:43:45 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59512 00:06:20.818 09:43:45 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:20.818 killing process with pid 59512 00:06:20.818 09:43:45 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:20.818 09:43:45 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59512' 00:06:20.818 09:43:45 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59512 00:06:20.818 09:43:45 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59512 00:06:23.358 09:43:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.358 09:43:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:23.358 09:43:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59493 ]] 00:06:23.358 Process with pid 59493 is not found 00:06:23.358 09:43:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59493 00:06:23.358 09:43:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59493 ']' 00:06:23.358 09:43:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59493 00:06:23.358 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59493) - No such process 00:06:23.358 09:43:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59493 is not found' 00:06:23.358 09:43:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59512 ]] 00:06:23.358 09:43:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59512 00:06:23.358 09:43:48 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59512 ']' 00:06:23.358 09:43:48 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59512 00:06:23.358 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59512) - No such process 00:06:23.358 09:43:48 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59512 is not found' 00:06:23.358 Process with pid 59512 is not found 00:06:23.358 09:43:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.358 00:06:23.358 real 0m50.091s 00:06:23.358 user 1m26.166s 00:06:23.359 sys 0m6.528s 00:06:23.359 09:43:48 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.359 09:43:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.359 
************************************ 00:06:23.359 END TEST cpu_locks 00:06:23.359 ************************************ 00:06:23.359 00:06:23.359 real 1m21.679s 00:06:23.359 user 2m29.196s 00:06:23.359 sys 0m10.498s 00:06:23.359 09:43:48 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.359 09:43:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.359 ************************************ 00:06:23.359 END TEST event 00:06:23.359 ************************************ 00:06:23.359 09:43:48 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:23.359 09:43:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.359 09:43:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.359 09:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:23.359 ************************************ 00:06:23.359 START TEST thread 00:06:23.359 ************************************ 00:06:23.359 09:43:48 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:23.359 * Looking for test storage... 
00:06:23.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:23.359 09:43:48 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.359 09:43:48 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.359 09:43:48 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.359 09:43:48 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.359 09:43:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.359 09:43:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.359 09:43:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.359 09:43:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.359 09:43:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.359 09:43:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.359 09:43:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.359 09:43:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.359 09:43:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.359 09:43:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.359 09:43:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.359 09:43:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:23.359 09:43:48 thread -- scripts/common.sh@345 -- # : 1 00:06:23.359 09:43:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.359 09:43:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.359 09:43:48 thread -- scripts/common.sh@365 -- # decimal 1 00:06:23.359 09:43:48 thread -- scripts/common.sh@353 -- # local d=1 00:06:23.359 09:43:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.359 09:43:48 thread -- scripts/common.sh@355 -- # echo 1 00:06:23.359 09:43:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.359 09:43:48 thread -- scripts/common.sh@366 -- # decimal 2 00:06:23.619 09:43:48 thread -- scripts/common.sh@353 -- # local d=2 00:06:23.619 09:43:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.619 09:43:48 thread -- scripts/common.sh@355 -- # echo 2 00:06:23.619 09:43:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.619 09:43:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.619 09:43:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.619 09:43:48 thread -- scripts/common.sh@368 -- # return 0 00:06:23.619 09:43:48 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.619 09:43:48 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.619 --rc genhtml_branch_coverage=1 00:06:23.619 --rc genhtml_function_coverage=1 00:06:23.619 --rc genhtml_legend=1 00:06:23.619 --rc geninfo_all_blocks=1 00:06:23.619 --rc geninfo_unexecuted_blocks=1 00:06:23.619 00:06:23.619 ' 00:06:23.619 09:43:48 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.619 --rc genhtml_branch_coverage=1 00:06:23.619 --rc genhtml_function_coverage=1 00:06:23.619 --rc genhtml_legend=1 00:06:23.619 --rc geninfo_all_blocks=1 00:06:23.619 --rc geninfo_unexecuted_blocks=1 00:06:23.619 00:06:23.619 ' 00:06:23.619 09:43:48 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.619 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.619 --rc genhtml_branch_coverage=1 00:06:23.619 --rc genhtml_function_coverage=1 00:06:23.619 --rc genhtml_legend=1 00:06:23.619 --rc geninfo_all_blocks=1 00:06:23.619 --rc geninfo_unexecuted_blocks=1 00:06:23.619 00:06:23.619 ' 00:06:23.619 09:43:48 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.619 --rc genhtml_branch_coverage=1 00:06:23.619 --rc genhtml_function_coverage=1 00:06:23.619 --rc genhtml_legend=1 00:06:23.619 --rc geninfo_all_blocks=1 00:06:23.619 --rc geninfo_unexecuted_blocks=1 00:06:23.619 00:06:23.619 ' 00:06:23.619 09:43:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.619 09:43:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:23.619 09:43:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.619 09:43:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.619 ************************************ 00:06:23.619 START TEST thread_poller_perf 00:06:23.619 ************************************ 00:06:23.619 09:43:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.619 [2024-12-06 09:43:48.692781] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:06:23.619 [2024-12-06 09:43:48.692897] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59713 ] 00:06:23.619 [2024-12-06 09:43:48.868486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.880 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:23.880 [2024-12-06 09:43:48.980335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.272 [2024-12-06T09:43:50.545Z] ====================================== 00:06:25.272 [2024-12-06T09:43:50.545Z] busy:2301403742 (cyc) 00:06:25.272 [2024-12-06T09:43:50.545Z] total_run_count: 405000 00:06:25.272 [2024-12-06T09:43:50.545Z] tsc_hz: 2290000000 (cyc) 00:06:25.272 [2024-12-06T09:43:50.545Z] ====================================== 00:06:25.272 [2024-12-06T09:43:50.545Z] poller_cost: 5682 (cyc), 2481 (nsec) 00:06:25.272 00:06:25.272 real 0m1.568s 00:06:25.272 user 0m1.362s 00:06:25.272 sys 0m0.100s 00:06:25.272 09:43:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.272 ************************************ 00:06:25.272 END TEST thread_poller_perf 00:06:25.272 ************************************ 00:06:25.272 09:43:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.272 09:43:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.272 09:43:50 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:25.272 09:43:50 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.272 09:43:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.272 ************************************ 00:06:25.272 START TEST thread_poller_perf 00:06:25.272 
************************************ 00:06:25.272 09:43:50 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.272 [2024-12-06 09:43:50.329666] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:25.272 [2024-12-06 09:43:50.329775] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59744 ] 00:06:25.272 [2024-12-06 09:43:50.499235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.531 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:25.531 [2024-12-06 09:43:50.615006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.910 [2024-12-06T09:43:52.183Z] ====================================== 00:06:26.910 [2024-12-06T09:43:52.183Z] busy:2294028378 (cyc) 00:06:26.910 [2024-12-06T09:43:52.183Z] total_run_count: 4862000 00:06:26.910 [2024-12-06T09:43:52.183Z] tsc_hz: 2290000000 (cyc) 00:06:26.910 [2024-12-06T09:43:52.183Z] ====================================== 00:06:26.910 [2024-12-06T09:43:52.183Z] poller_cost: 471 (cyc), 205 (nsec) 00:06:26.910 00:06:26.910 real 0m1.563s 00:06:26.910 user 0m1.348s 00:06:26.910 sys 0m0.105s 00:06:26.910 09:43:51 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.910 09:43:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.910 ************************************ 00:06:26.910 END TEST thread_poller_perf 00:06:26.910 ************************************ 00:06:26.910 09:43:51 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:26.910 ************************************ 00:06:26.910 END TEST thread 00:06:26.910 ************************************ 00:06:26.910 
00:06:26.910 real 0m3.481s 00:06:26.910 user 0m2.877s 00:06:26.910 sys 0m0.405s 00:06:26.910 09:43:51 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.910 09:43:51 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.910 09:43:51 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:26.910 09:43:51 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.910 09:43:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.910 09:43:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.910 09:43:51 -- common/autotest_common.sh@10 -- # set +x 00:06:26.910 ************************************ 00:06:26.910 START TEST app_cmdline 00:06:26.910 ************************************ 00:06:26.910 09:43:51 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:26.910 * Looking for test storage... 00:06:26.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:26.910 09:43:52 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.910 09:43:52 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.910 09:43:52 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.910 09:43:52 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.910 09:43:52 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:27.168 09:43:52 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.168 09:43:52 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.168 09:43:52 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.168 09:43:52 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:27.168 09:43:52 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.168 09:43:52 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:27.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.168 --rc genhtml_branch_coverage=1 00:06:27.168 --rc genhtml_function_coverage=1 00:06:27.168 --rc 
genhtml_legend=1 00:06:27.168 --rc geninfo_all_blocks=1 00:06:27.168 --rc geninfo_unexecuted_blocks=1 00:06:27.168 00:06:27.168 ' 00:06:27.168 09:43:52 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:27.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.168 --rc genhtml_branch_coverage=1 00:06:27.168 --rc genhtml_function_coverage=1 00:06:27.168 --rc genhtml_legend=1 00:06:27.168 --rc geninfo_all_blocks=1 00:06:27.168 --rc geninfo_unexecuted_blocks=1 00:06:27.168 00:06:27.168 ' 00:06:27.168 09:43:52 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:27.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.168 --rc genhtml_branch_coverage=1 00:06:27.168 --rc genhtml_function_coverage=1 00:06:27.168 --rc genhtml_legend=1 00:06:27.168 --rc geninfo_all_blocks=1 00:06:27.168 --rc geninfo_unexecuted_blocks=1 00:06:27.168 00:06:27.168 ' 00:06:27.168 09:43:52 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:27.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.168 --rc genhtml_branch_coverage=1 00:06:27.168 --rc genhtml_function_coverage=1 00:06:27.168 --rc genhtml_legend=1 00:06:27.168 --rc geninfo_all_blocks=1 00:06:27.168 --rc geninfo_unexecuted_blocks=1 00:06:27.168 00:06:27.168 ' 00:06:27.168 09:43:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:27.168 09:43:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59833 00:06:27.168 09:43:52 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:27.168 09:43:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59833 00:06:27.168 09:43:52 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59833 ']' 00:06:27.168 09:43:52 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.168 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:06:27.168 09:43:52 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.168 09:43:52 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.168 09:43:52 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.168 09:43:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.168 [2024-12-06 09:43:52.284569] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:27.169 [2024-12-06 09:43:52.284703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59833 ] 00:06:27.426 [2024-12-06 09:43:52.457970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.426 [2024-12-06 09:43:52.571720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.365 09:43:53 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.365 09:43:53 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:28.365 09:43:53 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:28.365 { 00:06:28.365 "version": "SPDK v25.01-pre git sha1 eec618948", 00:06:28.365 "fields": { 00:06:28.365 "major": 25, 00:06:28.365 "minor": 1, 00:06:28.365 "patch": 0, 00:06:28.365 "suffix": "-pre", 00:06:28.365 "commit": "eec618948" 00:06:28.365 } 00:06:28.365 } 00:06:28.365 09:43:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:28.365 09:43:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:28.365 09:43:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:28.365 09:43:53 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:28.365 09:43:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:28.365 09:43:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:28.365 09:43:53 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.365 09:43:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:28.365 09:43:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.365 09:43:53 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.624 09:43:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:28.625 09:43:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:28.625 09:43:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.625 request: 00:06:28.625 { 00:06:28.625 "method": "env_dpdk_get_mem_stats", 00:06:28.625 "req_id": 1 00:06:28.625 } 00:06:28.625 Got JSON-RPC error response 00:06:28.625 response: 00:06:28.625 { 00:06:28.625 "code": -32601, 00:06:28.625 "message": "Method not found" 00:06:28.625 } 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.625 09:43:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59833 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59833 ']' 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59833 00:06:28.625 09:43:53 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:28.884 09:43:53 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.884 09:43:53 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59833 00:06:28.884 killing process with pid 59833 00:06:28.884 09:43:53 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.884 09:43:53 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.884 09:43:53 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59833' 00:06:28.884 09:43:53 app_cmdline -- common/autotest_common.sh@973 -- # kill 59833 00:06:28.884 09:43:53 app_cmdline -- common/autotest_common.sh@978 -- # wait 59833 00:06:31.421 ************************************ 00:06:31.421 END TEST app_cmdline 00:06:31.421 ************************************ 
00:06:31.421 00:06:31.421 real 0m4.348s 00:06:31.421 user 0m4.569s 00:06:31.421 sys 0m0.583s 00:06:31.421 09:43:56 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.421 09:43:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.421 09:43:56 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.421 09:43:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.421 09:43:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.421 09:43:56 -- common/autotest_common.sh@10 -- # set +x 00:06:31.421 ************************************ 00:06:31.421 START TEST version 00:06:31.421 ************************************ 00:06:31.421 09:43:56 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.421 * Looking for test storage... 00:06:31.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:31.421 09:43:56 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.421 09:43:56 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.421 09:43:56 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.421 09:43:56 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.421 09:43:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.421 09:43:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.421 09:43:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.421 09:43:56 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.421 09:43:56 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.421 09:43:56 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.421 09:43:56 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.421 09:43:56 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.421 09:43:56 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.421 09:43:56 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:31.421 09:43:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.421 09:43:56 version -- scripts/common.sh@344 -- # case "$op" in 00:06:31.421 09:43:56 version -- scripts/common.sh@345 -- # : 1 00:06:31.421 09:43:56 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.421 09:43:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.421 09:43:56 version -- scripts/common.sh@365 -- # decimal 1 00:06:31.421 09:43:56 version -- scripts/common.sh@353 -- # local d=1 00:06:31.421 09:43:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.421 09:43:56 version -- scripts/common.sh@355 -- # echo 1 00:06:31.421 09:43:56 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.421 09:43:56 version -- scripts/common.sh@366 -- # decimal 2 00:06:31.421 09:43:56 version -- scripts/common.sh@353 -- # local d=2 00:06:31.421 09:43:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.421 09:43:56 version -- scripts/common.sh@355 -- # echo 2 00:06:31.421 09:43:56 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.421 09:43:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.421 09:43:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.421 09:43:56 version -- scripts/common.sh@368 -- # return 0 00:06:31.421 09:43:56 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.421 09:43:56 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.421 --rc genhtml_branch_coverage=1 00:06:31.421 --rc genhtml_function_coverage=1 00:06:31.421 --rc genhtml_legend=1 00:06:31.422 --rc geninfo_all_blocks=1 00:06:31.422 --rc geninfo_unexecuted_blocks=1 00:06:31.422 00:06:31.422 ' 00:06:31.422 09:43:56 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:06:31.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.422 --rc genhtml_branch_coverage=1 00:06:31.422 --rc genhtml_function_coverage=1 00:06:31.422 --rc genhtml_legend=1 00:06:31.422 --rc geninfo_all_blocks=1 00:06:31.422 --rc geninfo_unexecuted_blocks=1 00:06:31.422 00:06:31.422 ' 00:06:31.422 09:43:56 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.422 --rc genhtml_branch_coverage=1 00:06:31.422 --rc genhtml_function_coverage=1 00:06:31.422 --rc genhtml_legend=1 00:06:31.422 --rc geninfo_all_blocks=1 00:06:31.422 --rc geninfo_unexecuted_blocks=1 00:06:31.422 00:06:31.422 ' 00:06:31.422 09:43:56 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.422 --rc genhtml_branch_coverage=1 00:06:31.422 --rc genhtml_function_coverage=1 00:06:31.422 --rc genhtml_legend=1 00:06:31.422 --rc geninfo_all_blocks=1 00:06:31.422 --rc geninfo_unexecuted_blocks=1 00:06:31.422 00:06:31.422 ' 00:06:31.422 09:43:56 version -- app/version.sh@17 -- # get_header_version major 00:06:31.422 09:43:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.422 09:43:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.422 09:43:56 version -- app/version.sh@14 -- # cut -f2 00:06:31.422 09:43:56 version -- app/version.sh@17 -- # major=25 00:06:31.422 09:43:56 version -- app/version.sh@18 -- # get_header_version minor 00:06:31.422 09:43:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.422 09:43:56 version -- app/version.sh@14 -- # cut -f2 00:06:31.422 09:43:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.422 09:43:56 version -- app/version.sh@18 -- # minor=1 00:06:31.422 09:43:56 
version -- app/version.sh@19 -- # get_header_version patch 00:06:31.422 09:43:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.422 09:43:56 version -- app/version.sh@14 -- # cut -f2 00:06:31.422 09:43:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.422 09:43:56 version -- app/version.sh@19 -- # patch=0 00:06:31.422 09:43:56 version -- app/version.sh@20 -- # get_header_version suffix 00:06:31.422 09:43:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.422 09:43:56 version -- app/version.sh@14 -- # cut -f2 00:06:31.422 09:43:56 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.422 09:43:56 version -- app/version.sh@20 -- # suffix=-pre 00:06:31.422 09:43:56 version -- app/version.sh@22 -- # version=25.1 00:06:31.422 09:43:56 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:31.422 09:43:56 version -- app/version.sh@28 -- # version=25.1rc0 00:06:31.422 09:43:56 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.422 09:43:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:31.682 09:43:56 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:31.682 09:43:56 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:31.682 ************************************ 00:06:31.682 END TEST version 00:06:31.682 ************************************ 00:06:31.682 00:06:31.682 real 0m0.324s 00:06:31.682 user 0m0.197s 00:06:31.682 sys 0m0.180s 00:06:31.682 09:43:56 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.682 09:43:56 version -- common/autotest_common.sh@10 -- # set +x 00:06:31.682 
09:43:56 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:31.682 09:43:56 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:31.682 09:43:56 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:31.682 09:43:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.682 09:43:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.682 09:43:56 -- common/autotest_common.sh@10 -- # set +x 00:06:31.682 ************************************ 00:06:31.682 START TEST bdev_raid 00:06:31.682 ************************************ 00:06:31.682 09:43:56 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:31.682 * Looking for test storage... 00:06:31.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:31.682 09:43:56 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.682 09:43:56 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.682 09:43:56 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.942 09:43:56 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.942 09:43:56 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:31.942 09:43:56 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.942 09:43:56 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.942 --rc genhtml_branch_coverage=1 00:06:31.942 --rc genhtml_function_coverage=1 00:06:31.942 --rc genhtml_legend=1 00:06:31.942 --rc geninfo_all_blocks=1 00:06:31.942 --rc geninfo_unexecuted_blocks=1 00:06:31.942 00:06:31.942 ' 00:06:31.942 09:43:56 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.942 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:31.942 --rc genhtml_branch_coverage=1 00:06:31.942 --rc genhtml_function_coverage=1 00:06:31.942 --rc genhtml_legend=1 00:06:31.942 --rc geninfo_all_blocks=1 00:06:31.942 --rc geninfo_unexecuted_blocks=1 00:06:31.942 00:06:31.942 ' 00:06:31.942 09:43:56 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.942 --rc genhtml_branch_coverage=1 00:06:31.942 --rc genhtml_function_coverage=1 00:06:31.942 --rc genhtml_legend=1 00:06:31.942 --rc geninfo_all_blocks=1 00:06:31.942 --rc geninfo_unexecuted_blocks=1 00:06:31.942 00:06:31.942 ' 00:06:31.942 09:43:56 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.942 --rc genhtml_branch_coverage=1 00:06:31.942 --rc genhtml_function_coverage=1 00:06:31.942 --rc genhtml_legend=1 00:06:31.942 --rc geninfo_all_blocks=1 00:06:31.942 --rc geninfo_unexecuted_blocks=1 00:06:31.942 00:06:31.942 ' 00:06:31.942 09:43:56 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:31.942 09:43:56 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:31.942 09:43:56 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:31.942 09:43:56 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:31.942 09:43:56 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:31.942 09:43:56 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:31.942 09:43:56 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:31.942 09:43:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.942 09:43:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.942 09:43:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:31.942 ************************************ 
00:06:31.942 START TEST raid1_resize_data_offset_test 00:06:31.942 ************************************ 00:06:31.942 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:31.942 09:43:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60026 00:06:31.942 Process raid pid: 60026 00:06:31.942 09:43:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:31.942 09:43:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60026' 00:06:31.942 09:43:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60026 00:06:31.942 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60026 ']' 00:06:31.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.942 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.942 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.943 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.943 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.943 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.943 [2024-12-06 09:43:57.101228] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:06:31.943 [2024-12-06 09:43:57.101919] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.203 [2024-12-06 09:43:57.264251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.203 [2024-12-06 09:43:57.377581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.463 [2024-12-06 09:43:57.572297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.463 [2024-12-06 09:43:57.572425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.723 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.723 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:32.723 09:43:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:32.723 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.723 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.983 malloc0 00:06:32.983 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.983 09:43:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:32.983 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.983 09:43:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.983 malloc1 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.983 09:43:58 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.983 null0 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.983 [2024-12-06 09:43:58.099105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:32.983 [2024-12-06 09:43:58.101037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:32.983 [2024-12-06 09:43:58.101139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:32.983 [2024-12-06 09:43:58.101391] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:32.983 [2024-12-06 09:43:58.101449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:32.983 [2024-12-06 09:43:58.101774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:32.983 [2024-12-06 09:43:58.102003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:32.983 [2024-12-06 09:43:58.102023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:32.983 [2024-12-06 09:43:58.102210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.983 [2024-12-06 09:43:58.159016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:32.983 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.555 malloc2 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.555 [2024-12-06 09:43:58.675693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:33.555 [2024-12-06 09:43:58.691729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.555 [2024-12-06 09:43:58.693667] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60026 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60026 ']' 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60026 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60026 00:06:33.555 killing process with pid 60026 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60026' 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60026 00:06:33.555 [2024-12-06 09:43:58.785664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:33.555 09:43:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60026 00:06:33.555 [2024-12-06 09:43:58.787359] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:33.555 [2024-12-06 09:43:58.787544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:33.555 [2024-12-06 09:43:58.787567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:33.555 [2024-12-06 09:43:58.823002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:33.555 [2024-12-06 09:43:58.823373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:33.555 [2024-12-06 09:43:58.823396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:35.465 [2024-12-06 09:44:00.610502] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:36.845 ************************************ 00:06:36.845 END TEST raid1_resize_data_offset_test 00:06:36.845 ************************************ 00:06:36.845 09:44:01 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:06:36.845 00:06:36.845 real 0m4.741s 00:06:36.845 user 0m4.645s 00:06:36.845 sys 0m0.535s 00:06:36.845 09:44:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.845 09:44:01 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.845 09:44:01 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:36.845 09:44:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:36.845 09:44:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.845 09:44:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:36.845 ************************************ 00:06:36.845 START TEST raid0_resize_superblock_test 00:06:36.845 ************************************ 00:06:36.846 09:44:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:36.846 09:44:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:36.846 Process raid pid: 60110 00:06:36.846 09:44:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60110 00:06:36.846 09:44:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60110' 00:06:36.846 09:44:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:36.846 09:44:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60110 00:06:36.846 09:44:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60110 ']' 00:06:36.846 09:44:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.846 09:44:01 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.846 09:44:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.846 09:44:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.846 09:44:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.846 [2024-12-06 09:44:01.896626] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:36.846 [2024-12-06 09:44:01.896742] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.846 [2024-12-06 09:44:02.067420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.105 [2024-12-06 09:44:02.184885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.364 [2024-12-06 09:44:02.382638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.364 [2024-12-06 09:44:02.382681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.623 09:44:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.623 09:44:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:37.623 09:44:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:37.623 09:44:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.624 09:44:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:38.193 malloc0 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.193 [2024-12-06 09:44:03.261310] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:38.193 [2024-12-06 09:44:03.261372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:38.193 [2024-12-06 09:44:03.261395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:38.193 [2024-12-06 09:44:03.261406] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:38.193 [2024-12-06 09:44:03.263650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:38.193 [2024-12-06 09:44:03.263693] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:38.193 pt0 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.193 8563bd44-f12c-44c8-a648-bc337f57a631 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.193 86ce0cad-54ed-4c35-8da5-39e611660e2e 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.193 d513a35a-a6dc-458f-a840-ed113df8a5ed 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.193 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.193 [2024-12-06 09:44:03.394972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 86ce0cad-54ed-4c35-8da5-39e611660e2e is claimed 00:06:38.193 [2024-12-06 09:44:03.395065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d513a35a-a6dc-458f-a840-ed113df8a5ed is claimed 00:06:38.193 [2024-12-06 09:44:03.395279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:38.193 [2024-12-06 09:44:03.395317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:38.193 [2024-12-06 09:44:03.395617] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:38.193 [2024-12-06 09:44:03.395858] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:38.194 [2024-12-06 09:44:03.395908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:38.194 [2024-12-06 09:44:03.396107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.194 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.194 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:38.194 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.194 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:38.194 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.194 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.194 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:38.194 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:38.194 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:38.194 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.194 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:38.454 09:44:03 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.454 [2024-12-06 09:44:03.487050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.454 [2024-12-06 09:44:03.534956] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:38.454 [2024-12-06 09:44:03.534985] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '86ce0cad-54ed-4c35-8da5-39e611660e2e' was resized: old size 131072, new size 204800 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.454 [2024-12-06 09:44:03.546917] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:38.454 [2024-12-06 09:44:03.546948] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd513a35a-a6dc-458f-a840-ed113df8a5ed' was resized: old size 131072, new size 204800 00:06:38.454 [2024-12-06 09:44:03.546982] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.454 09:44:03 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:38.454 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.455 [2024-12-06 09:44:03.658728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.455 [2024-12-06 09:44:03.702449] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:38.455 [2024-12-06 09:44:03.702576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:38.455 [2024-12-06 09:44:03.702596] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:38.455 [2024-12-06 09:44:03.702610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:38.455 [2024-12-06 09:44:03.702730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.455 [2024-12-06 09:44:03.702765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:38.455 [2024-12-06 09:44:03.702777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.455 [2024-12-06 09:44:03.714370] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:38.455 [2024-12-06 09:44:03.714416] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:38.455 [2024-12-06 09:44:03.714451] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:38.455 [2024-12-06 09:44:03.714461] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:38.455 [2024-12-06 09:44:03.716605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:38.455 [2024-12-06 09:44:03.716644] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:38.455 [2024-12-06 09:44:03.718204] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 86ce0cad-54ed-4c35-8da5-39e611660e2e 00:06:38.455 [2024-12-06 09:44:03.718263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 86ce0cad-54ed-4c35-8da5-39e611660e2e is claimed 00:06:38.455 [2024-12-06 09:44:03.718364] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d513a35a-a6dc-458f-a840-ed113df8a5ed 00:06:38.455 [2024-12-06 09:44:03.718381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev d513a35a-a6dc-458f-a840-ed113df8a5ed is claimed 00:06:38.455 [2024-12-06 09:44:03.718501] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev d513a35a-a6dc-458f-a840-ed113df8a5ed (2) smaller than existing raid bdev Raid (3) 00:06:38.455 [2024-12-06 09:44:03.718522] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 86ce0cad-54ed-4c35-8da5-39e611660e2e: File exists 00:06:38.455 [2024-12-06 09:44:03.718561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:38.455 [2024-12-06 09:44:03.718572] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:38.455 [2024-12-06 09:44:03.718813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:38.455 [2024-12-06 09:44:03.718961] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:38.455 [2024-12-06 09:44:03.718976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:38.455 [2024-12-06 09:44:03.719179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.455 pt0 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.455 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.718 [2024-12-06 09:44:03.742726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60110 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60110 ']' 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60110 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60110 00:06:38.718 killing process with pid 60110 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60110' 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60110 00:06:38.718 [2024-12-06 09:44:03.818850] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:38.718 [2024-12-06 09:44:03.818914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.718 [2024-12-06 09:44:03.818955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:38.718 [2024-12-06 09:44:03.818963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:38.718 09:44:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60110 00:06:40.098 [2024-12-06 09:44:05.234063] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:41.476 09:44:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:41.476 00:06:41.476 real 0m4.541s 00:06:41.476 user 0m4.768s 00:06:41.476 sys 0m0.534s 00:06:41.476 09:44:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.476 09:44:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.476 
************************************ 00:06:41.476 END TEST raid0_resize_superblock_test 00:06:41.476 ************************************ 00:06:41.476 09:44:06 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:41.476 09:44:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.476 09:44:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.476 09:44:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:41.476 ************************************ 00:06:41.476 START TEST raid1_resize_superblock_test 00:06:41.476 ************************************ 00:06:41.476 09:44:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:41.476 09:44:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:41.476 09:44:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60204 00:06:41.476 09:44:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:41.476 Process raid pid: 60204 00:06:41.476 09:44:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60204' 00:06:41.476 09:44:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60204 00:06:41.476 09:44:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60204 ']' 00:06:41.476 09:44:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.476 09:44:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:41.476 09:44:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.476 09:44:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.476 09:44:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.476 [2024-12-06 09:44:06.505896] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:41.476 [2024-12-06 09:44:06.506085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.476 [2024-12-06 09:44:06.677989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.734 [2024-12-06 09:44:06.787892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.734 [2024-12-06 09:44:06.995906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.734 [2024-12-06 09:44:06.995994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.304 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.304 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:42.304 09:44:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:42.304 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.304 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.873 malloc0 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.873 09:44:07 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.873 [2024-12-06 09:44:07.872829] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:42.873 [2024-12-06 09:44:07.872889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:42.873 [2024-12-06 09:44:07.872928] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:42.873 [2024-12-06 09:44:07.872940] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:42.873 [2024-12-06 09:44:07.875037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:42.873 [2024-12-06 09:44:07.875076] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:42.873 pt0 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.873 c01f3d7e-2901-4d58-9c2a-44b40c5cae2f 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.873 09:44:07 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.873 ec6097dc-2d72-4da6-9b9e-74158c81bad3 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.873 45acfdca-4de0-44e9-980a-6b8240fd4d0d 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.873 09:44:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.873 [2024-12-06 09:44:08.005792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ec6097dc-2d72-4da6-9b9e-74158c81bad3 is claimed 00:06:42.873 [2024-12-06 09:44:08.005938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 45acfdca-4de0-44e9-980a-6b8240fd4d0d is claimed 00:06:42.873 [2024-12-06 09:44:08.006095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:42.873 [2024-12-06 09:44:08.006113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:42.873 [2024-12-06 09:44:08.006422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:42.873 [2024-12-06 09:44:08.006649] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:42.873 [2024-12-06 09:44:08.006661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:42.873 [2024-12-06 09:44:08.006819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.873 [2024-12-06 09:44:08.117888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.873 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.135 [2024-12-06 09:44:08.165740] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:43.135 [2024-12-06 09:44:08.165814] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ec6097dc-2d72-4da6-9b9e-74158c81bad3' was resized: old size 131072, new size 204800 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:43.135 09:44:08 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.135 [2024-12-06 09:44:08.177633] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:43.135 [2024-12-06 09:44:08.177698] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '45acfdca-4de0-44e9-980a-6b8240fd4d0d' was resized: old size 131072, new size 204800 00:06:43.135 [2024-12-06 09:44:08.177764] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:43.135 [2024-12-06 09:44:08.285538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.135 [2024-12-06 09:44:08.333274] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:43.135 [2024-12-06 09:44:08.333348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:06:43.135 [2024-12-06 09:44:08.333373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:43.135 [2024-12-06 09:44:08.333516] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:43.135 [2024-12-06 09:44:08.333711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.135 [2024-12-06 09:44:08.333776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.135 [2024-12-06 09:44:08.333789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.135 [2024-12-06 09:44:08.341203] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:43.135 [2024-12-06 09:44:08.341250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.135 [2024-12-06 09:44:08.341285] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:43.135 [2024-12-06 09:44:08.341298] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.135 [2024-12-06 09:44:08.343431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.135 [2024-12-06 09:44:08.343478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:43.135 [2024-12-06 09:44:08.345107] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
ec6097dc-2d72-4da6-9b9e-74158c81bad3 00:06:43.135 [2024-12-06 09:44:08.345253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ec6097dc-2d72-4da6-9b9e-74158c81bad3 is claimed 00:06:43.135 [2024-12-06 09:44:08.345384] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 45acfdca-4de0-44e9-980a-6b8240fd4d0d 00:06:43.135 [2024-12-06 09:44:08.345404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 45acfdca-4de0-44e9-980a-6b8240fd4d0d is claimed 00:06:43.135 [2024-12-06 09:44:08.345526] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 45acfdca-4de0-44e9-980a-6b8240fd4d0d (2) smaller than existing raid bdev Raid (3) 00:06:43.135 [2024-12-06 09:44:08.345547] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ec6097dc-2d72-4da6-9b9e-74158c81bad3: File exists 00:06:43.135 [2024-12-06 09:44:08.345588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:43.135 [2024-12-06 09:44:08.345599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:43.135 [2024-12-06 09:44:08.345842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:43.135 [2024-12-06 09:44:08.345999] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:43.135 [2024-12-06 09:44:08.346014] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:43.135 [2024-12-06 09:44:08.346211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.135 pt0 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:43.135 [2024-12-06 09:44:08.361769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60204 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60204 ']' 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60204 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:43.135 09:44:08 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.395 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60204 00:06:43.395 killing process with pid 60204 00:06:43.395 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.395 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.395 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60204' 00:06:43.395 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60204 00:06:43.395 [2024-12-06 09:44:08.438398] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.395 [2024-12-06 09:44:08.438459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.395 [2024-12-06 09:44:08.438504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.395 [2024-12-06 09:44:08.438513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:43.395 09:44:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60204 00:06:44.772 [2024-12-06 09:44:09.854605] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:46.162 ************************************ 00:06:46.162 END TEST raid1_resize_superblock_test 00:06:46.162 ************************************ 00:06:46.162 09:44:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:46.162 00:06:46.162 real 0m4.582s 00:06:46.162 user 0m4.821s 00:06:46.162 sys 0m0.550s 00:06:46.162 09:44:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.162 09:44:10 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.162 09:44:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:46.162 09:44:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:46.162 09:44:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:46.162 09:44:11 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:46.162 09:44:11 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:46.162 09:44:11 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:46.162 09:44:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.162 09:44:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.162 09:44:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:46.162 ************************************ 00:06:46.162 START TEST raid_function_test_raid0 00:06:46.162 ************************************ 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:46.162 Process raid pid: 60311 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60311 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60311' 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60311 00:06:46.162 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60311 ']' 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.162 09:44:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.162 [2024-12-06 09:44:11.182912] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:46.162 [2024-12-06 09:44:11.183137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.162 [2024-12-06 09:44:11.360562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.421 [2024-12-06 09:44:11.480172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.421 [2024-12-06 09:44:11.679592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.421 [2024-12-06 09:44:11.679720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:46.989 09:44:12 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.989 Base_1 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.989 Base_2 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.989 [2024-12-06 09:44:12.118654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:46.989 [2024-12-06 09:44:12.120729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:46.989 [2024-12-06 09:44:12.120798] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:46.989 [2024-12-06 09:44:12.120810] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:46.989 [2024-12-06 09:44:12.121064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:46.989 [2024-12-06 09:44:12.121225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:46.989 [2024-12-06 09:44:12.121236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:46.989 [2024-12-06 09:44:12.121392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.989 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:06:46.990 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:47.249 [2024-12-06 09:44:12.366358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:47.249 /dev/nbd0 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.249 1+0 records in 00:06:47.249 1+0 records out 00:06:47.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203842 s, 20.1 MB/s 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@890 -- # size=4096 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.249 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.508 { 00:06:47.508 "nbd_device": "/dev/nbd0", 00:06:47.508 "bdev_name": "raid" 00:06:47.508 } 00:06:47.508 ]' 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.508 { 00:06:47.508 "nbd_device": "/dev/nbd0", 00:06:47.508 "bdev_name": "raid" 00:06:47.508 } 00:06:47.508 ]' 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:47.508 
09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:47.508 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:47.509 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:47.509 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:47.509 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:47.509 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:47.509 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:47.509 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 
00:06:47.509 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:47.509 4096+0 records in 00:06:47.509 4096+0 records out 00:06:47.509 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0345308 s, 60.7 MB/s 00:06:47.509 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:47.769 4096+0 records in 00:06:47.769 4096+0 records out 00:06:47.769 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.20894 s, 10.0 MB/s 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:47.769 128+0 records in 00:06:47.769 128+0 records out 00:06:47.769 65536 bytes (66 kB, 64 KiB) copied, 0.00120421 s, 54.4 MB/s 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( 
i++ )) 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:47.769 09:44:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:47.769 2035+0 records in 00:06:47.769 2035+0 records out 00:06:47.769 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0152463 s, 68.3 MB/s 00:06:47.769 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:47.769 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.769 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.769 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:47.769 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.769 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:47.769 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:47.769 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:47.769 456+0 records in 00:06:47.769 456+0 records out 00:06:47.769 233472 bytes (233 kB, 228 KiB) copied, 0.00402328 s, 58.0 MB/s 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:48.029 09:44:13 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:48.029 [2024-12-06 09:44:13.287831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:48.029 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:48.289 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:48.289 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.289 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.289 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.289 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.548 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60311 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60311 ']' 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@958 -- # kill -0 60311 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60311 00:06:48.549 killing process with pid 60311 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60311' 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60311 00:06:48.549 [2024-12-06 09:44:13.613868] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:48.549 [2024-12-06 09:44:13.613965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:48.549 [2024-12-06 09:44:13.614011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:48.549 [2024-12-06 09:44:13.614026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:48.549 09:44:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60311 00:06:48.807 [2024-12-06 09:44:13.820790] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:49.743 ************************************ 00:06:49.743 END TEST raid_function_test_raid0 00:06:49.743 ************************************ 00:06:49.743 09:44:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:49.743 00:06:49.743 real 0m3.862s 00:06:49.743 user 0m4.522s 00:06:49.743 sys 0m0.909s 00:06:49.743 
09:44:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.743 09:44:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:49.743 09:44:14 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:49.743 09:44:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.743 09:44:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.743 09:44:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:49.743 ************************************ 00:06:49.743 START TEST raid_function_test_concat 00:06:49.743 ************************************ 00:06:49.743 09:44:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:49.743 09:44:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:49.743 09:44:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:49.743 09:44:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:49.743 09:44:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60433 00:06:49.743 Process raid pid: 60433 00:06:49.743 09:44:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:49.743 09:44:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60433' 00:06:49.743 09:44:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60433 00:06:49.743 09:44:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60433 ']' 00:06:49.743 09:44:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.743 09:44:15 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.743 09:44:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.743 09:44:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.001 09:44:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.001 [2024-12-06 09:44:15.136056] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:50.001 [2024-12-06 09:44:15.136699] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.260 [2024-12-06 09:44:15.333880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.260 [2024-12-06 09:44:15.455021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.519 [2024-12-06 09:44:15.660911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.519 [2024-12-06 09:44:15.661054] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.777 09:44:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.777 09:44:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:50.777 09:44:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:50.777 09:44:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.777 09:44:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.777 Base_1 
00:06:50.777 09:44:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.777 09:44:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:50.777 09:44:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.777 09:44:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.777 Base_2 00:06:50.777 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.777 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:50.777 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.777 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.777 [2024-12-06 09:44:16.048314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:51.036 [2024-12-06 09:44:16.050357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:51.036 [2024-12-06 09:44:16.050489] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:51.036 [2024-12-06 09:44:16.050534] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:51.036 [2024-12-06 09:44:16.050842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:51.036 [2024-12-06 09:44:16.051056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:51.036 [2024-12-06 09:44:16.051109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:51.036 [2024-12-06 09:44:16.051309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.036 09:44:16 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:51.036 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:06:51.036 [2024-12-06 09:44:16.291992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:51.331 /dev/nbd0 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:51.331 1+0 records in 00:06:51.331 1+0 records out 00:06:51.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597357 s, 6.9 MB/s 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.331 { 00:06:51.331 "nbd_device": "/dev/nbd0", 00:06:51.331 "bdev_name": "raid" 00:06:51.331 } 00:06:51.331 ]' 00:06:51.331 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.332 { 00:06:51.332 "nbd_device": "/dev/nbd0", 00:06:51.332 "bdev_name": "raid" 00:06:51.332 } 00:06:51.332 ]' 00:06:51.332 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:51.590 09:44:16 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:51.590 4096+0 records in 00:06:51.590 4096+0 records out 00:06:51.590 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0337912 s, 62.1 MB/s 00:06:51.590 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:51.849 4096+0 records in 00:06:51.849 4096+0 records out 00:06:51.849 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.204462 s, 10.3 MB/s 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:51.849 128+0 records in 00:06:51.849 128+0 records out 00:06:51.849 65536 bytes (66 kB, 64 KiB) copied, 0.00109089 s, 60.1 MB/s 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:51.849 2035+0 records in 00:06:51.849 2035+0 records out 00:06:51.849 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0125055 s, 83.3 MB/s 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:51.849 09:44:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:51.849 456+0 records in 00:06:51.849 456+0 records out 00:06:51.849 233472 bytes (233 kB, 228 KiB) copied, 0.00350774 s, 66.6 MB/s 00:06:51.849 09:44:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:51.849 09:44:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.849 09:44:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.849 09:44:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.849 09:44:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.849 09:44:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:51.849 09:44:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:51.849 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.849 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:51.849 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.849 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.849 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.849 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:52.107 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.107 [2024-12-06 09:44:17.294924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.107 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.107 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.108 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.108 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.108 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.108 09:44:17 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:52.108 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.108 09:44:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:52.108 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:52.108 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60433 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60433 ']' 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- 
# kill -0 60433 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60433 00:06:52.366 killing process with pid 60433 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60433' 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60433 00:06:52.366 [2024-12-06 09:44:17.612439] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:52.366 [2024-12-06 09:44:17.612551] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.366 09:44:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60433 00:06:52.366 [2024-12-06 09:44:17.612613] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:52.366 [2024-12-06 09:44:17.612627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:52.624 [2024-12-06 09:44:17.823756] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:54.002 09:44:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:54.002 00:06:54.002 real 0m3.947s 00:06:54.002 user 0m4.646s 00:06:54.002 sys 0m0.953s 00:06:54.002 09:44:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.002 09:44:18 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:06:54.002 ************************************ 00:06:54.002 END TEST raid_function_test_concat 00:06:54.002 ************************************ 00:06:54.002 09:44:19 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:54.002 09:44:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.002 09:44:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.002 09:44:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:54.002 ************************************ 00:06:54.002 START TEST raid0_resize_test 00:06:54.002 ************************************ 00:06:54.002 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:54.002 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60557 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60557' 00:06:54.003 Process raid pid: 60557 
00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60557 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60557 ']' 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.003 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.003 [2024-12-06 09:44:19.107216] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:06:54.003 [2024-12-06 09:44:19.107346] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.262 [2024-12-06 09:44:19.281880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.262 [2024-12-06 09:44:19.397755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.521 [2024-12-06 09:44:19.600743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.521 [2024-12-06 09:44:19.600881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.781 Base_1 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.781 Base_2 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.781 [2024-12-06 09:44:19.979438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:54.781 [2024-12-06 09:44:19.981383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:54.781 [2024-12-06 09:44:19.981483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:54.781 [2024-12-06 09:44:19.981521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:54.781 [2024-12-06 09:44:19.981823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:54.781 [2024-12-06 09:44:19.981983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:54.781 [2024-12-06 09:44:19.982018] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:54.781 [2024-12-06 09:44:19.982235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.781 [2024-12-06 09:44:19.991372] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.781 [2024-12-06 09:44:19.991435] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:54.781 true 
00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.781 09:44:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.781 [2024-12-06 09:44:20.007518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.781 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.781 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:54.781 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:54.781 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:54.781 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:54.781 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:54.781 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:54.781 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.781 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.041 [2024-12-06 09:44:20.055331] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:55.041 [2024-12-06 09:44:20.055420] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:55.041 [2024-12-06 09:44:20.055502] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:55.041 true 
00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.041 [2024-12-06 09:44:20.071456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60557 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60557 ']' 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60557 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:55.041 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.042 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60557 00:06:55.042 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.042 killing process with pid 60557 
00:06:55.042 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.042 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60557' 00:06:55.042 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60557 00:06:55.042 [2024-12-06 09:44:20.155389] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.042 [2024-12-06 09:44:20.155486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.042 [2024-12-06 09:44:20.155548] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.042 [2024-12-06 09:44:20.155560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:55.042 09:44:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60557 00:06:55.042 [2024-12-06 09:44:20.173953] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:56.423 09:44:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:56.423 00:06:56.423 real 0m2.265s 00:06:56.423 user 0m2.427s 00:06:56.423 sys 0m0.330s 00:06:56.423 09:44:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.423 09:44:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.423 ************************************ 00:06:56.423 END TEST raid0_resize_test 00:06:56.423 ************************************ 00:06:56.423 09:44:21 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:56.423 09:44:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.423 09:44:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.423 09:44:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:56.423 ************************************ 
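The raid0_resize_test run above passes because the array's block count doubles when both base bdevs are resized (131072 → 262144 blocks). A minimal sketch of the size arithmetic the test performs — the values are taken from the log above, and the variable names mirror the test script's locals rather than any SPDK API:

```shell
#!/usr/bin/env bash
# Size bookkeeping from raid0_resize_test, using the values logged above.
# Each base bdev is created with 'bdev_null_create Base_N 32 512' and later
# grown with 'bdev_null_resize Base_N 64'.
blksize=512           # bytes per block, as passed to bdev_null_create
bdev_size_mb=32       # initial size of each base bdev
new_bdev_size_mb=64   # size after bdev_null_resize

# raid0 stripes across both bases, so the array size is the sum of the base
# sizes; raid1 (mirroring) stays at the size of a single base.
raid0_expected_mb=$((2 * new_bdev_size_mb))
raid1_expected_mb=$((new_bdev_size_mb))

# The test reads num_blocks via 'bdev_get_bdevs -b Raid' piped through
# 'jq .[].num_blocks', then converts back to MiB the same way:
blkcnt=262144         # num_blocks reported after both resizes in the log
raid_size_mb=$((blkcnt * blksize / 1048576))

[ "$raid_size_mb" -eq "$raid0_expected_mb" ] || exit 1
echo "raid0 size after resize: ${raid_size_mb} MiB"
```

This matches the log's `'[' 128 '!=' 128 ']'` comparison for raid0, and the raid1 run that follows checks 64 MiB (131072 blocks) by the same arithmetic.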
00:06:56.423 START TEST raid1_resize_test 00:06:56.423 ************************************ 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60613 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60613' 00:06:56.423 Process raid pid: 60613 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60613 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60613 ']' 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.423 09:44:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.423 [2024-12-06 09:44:21.438326] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:06:56.423 [2024-12-06 09:44:21.438439] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.423 [2024-12-06 09:44:21.614321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.683 [2024-12-06 09:44:21.728325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.683 [2024-12-06 09:44:21.933391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.683 [2024-12-06 09:44:21.933434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.254 Base_1 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.254 09:44:22 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.254 Base_2 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.254 [2024-12-06 09:44:22.316967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:57.254 [2024-12-06 09:44:22.318856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:57.254 [2024-12-06 09:44:22.318961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:57.254 [2024-12-06 09:44:22.319021] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:57.254 [2024-12-06 09:44:22.319333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:57.254 [2024-12-06 09:44:22.319526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:57.254 [2024-12-06 09:44:22.319568] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:57.254 [2024-12-06 09:44:22.319764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.254 09:44:22 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.254 [2024-12-06 09:44:22.328925] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:57.254 [2024-12-06 09:44:22.328994] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:57.254 true 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:57.254 [2024-12-06 09:44:22.341071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.254 [2024-12-06 09:44:22.392875] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:57.254 [2024-12-06 09:44:22.392967] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:57.254 [2024-12-06 09:44:22.393032] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:57.254 true 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.254 [2024-12-06 09:44:22.409036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60613 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 
-- # '[' -z 60613 ']' 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60613 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60613 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60613' 00:06:57.254 killing process with pid 60613 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60613 00:06:57.254 [2024-12-06 09:44:22.491958] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.254 09:44:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60613 00:06:57.254 [2024-12-06 09:44:22.492190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.254 [2024-12-06 09:44:22.492892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.254 [2024-12-06 09:44:22.493017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:57.254 [2024-12-06 09:44:22.511016] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:58.633 09:44:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:58.633 00:06:58.633 real 0m2.317s 00:06:58.633 user 0m2.485s 00:06:58.633 sys 0m0.319s 00:06:58.633 09:44:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.633 09:44:23 bdev_raid.raid1_resize_test 
-- common/autotest_common.sh@10 -- # set +x 00:06:58.633 ************************************ 00:06:58.633 END TEST raid1_resize_test 00:06:58.633 ************************************ 00:06:58.633 09:44:23 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:58.633 09:44:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:58.633 09:44:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:58.633 09:44:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:58.634 09:44:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.634 09:44:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:58.634 ************************************ 00:06:58.634 START TEST raid_state_function_test 00:06:58.634 ************************************ 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.634 
09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60681 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60681' 00:06:58.634 Process raid pid: 60681 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60681 00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60681 ']'
00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:58.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:58.634 09:44:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:58.634 [2024-12-06 09:44:23.822217] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization...
00:06:58.634 [2024-12-06 09:44:23.822445] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:58.892 [2024-12-06 09:44:23.997821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:58.892 [2024-12-06 09:44:24.116851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.152 [2024-12-06 09:44:24.330039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:59.152 [2024-12-06 09:44:24.330181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.720 [2024-12-06 09:44:24.704771] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:06:59.720 [2024-12-06 09:44:24.704887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:06:59.720 [2024-12-06 09:44:24.704921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:06:59.720 [2024-12-06 09:44:24.704946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:59.720 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.721 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.721 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.721 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:59.721 "name": "Existed_Raid",
00:06:59.721 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:59.721 "strip_size_kb": 64,
00:06:59.721 "state": "configuring",
00:06:59.721 "raid_level": "raid0",
00:06:59.721 "superblock": false,
00:06:59.721 "num_base_bdevs": 2,
00:06:59.721 "num_base_bdevs_discovered": 0,
00:06:59.721 "num_base_bdevs_operational": 2,
00:06:59.721 "base_bdevs_list": [
00:06:59.721 {
00:06:59.721 "name": "BaseBdev1",
00:06:59.721 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:59.721 "is_configured": false,
00:06:59.721 "data_offset": 0,
00:06:59.721 "data_size": 0
00:06:59.721 },
00:06:59.721 {
00:06:59.721 "name": "BaseBdev2",
00:06:59.721 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:59.721 "is_configured": false,
00:06:59.721 "data_offset": 0,
00:06:59.721 "data_size": 0
00:06:59.721 }
00:06:59.721 ]
00:06:59.721 }'
00:06:59.721 09:44:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:59.721 09:44:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.981 [2024-12-06 09:44:25.108032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:06:59.981 [2024-12-06 09:44:25.108117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.981 [2024-12-06 09:44:25.119994] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:06:59.981 [2024-12-06 09:44:25.120076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:06:59.981 [2024-12-06 09:44:25.120106] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:06:59.981 [2024-12-06 09:44:25.120131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.981 [2024-12-06 09:44:25.168761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:06:59.981 BaseBdev1
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.981 [
00:06:59.981 {
00:06:59.981 "name": "BaseBdev1",
00:06:59.981 "aliases": [
00:06:59.981 "ac26fc77-692a-41a5-8d4a-e77b5511ef9a"
00:06:59.981 ],
00:06:59.981 "product_name": "Malloc disk",
00:06:59.981 "block_size": 512,
00:06:59.981 "num_blocks": 65536,
00:06:59.981 "uuid": "ac26fc77-692a-41a5-8d4a-e77b5511ef9a",
00:06:59.981 "assigned_rate_limits": {
00:06:59.981 "rw_ios_per_sec": 0,
00:06:59.981 "rw_mbytes_per_sec": 0,
00:06:59.981 "r_mbytes_per_sec": 0,
00:06:59.981 "w_mbytes_per_sec": 0
00:06:59.981 },
00:06:59.981 "claimed": true,
00:06:59.981 "claim_type": "exclusive_write",
00:06:59.981 "zoned": false,
00:06:59.981 "supported_io_types": {
00:06:59.981 "read": true,
00:06:59.981 "write": true,
00:06:59.981 "unmap": true,
00:06:59.981 "flush": true,
00:06:59.981 "reset": true,
00:06:59.981 "nvme_admin": false,
00:06:59.981 "nvme_io": false,
00:06:59.981 "nvme_io_md": false,
00:06:59.981 "write_zeroes": true,
00:06:59.981 "zcopy": true,
00:06:59.981 "get_zone_info": false,
00:06:59.981 "zone_management": false,
00:06:59.981 "zone_append": false,
00:06:59.981 "compare": false,
00:06:59.981 "compare_and_write": false,
00:06:59.981 "abort": true,
00:06:59.981 "seek_hole": false,
00:06:59.981 "seek_data": false,
00:06:59.981 "copy": true,
00:06:59.981 "nvme_iov_md": false
00:06:59.981 },
00:06:59.981 "memory_domains": [
00:06:59.981 {
00:06:59.981 "dma_device_id": "system",
00:06:59.981 "dma_device_type": 1
00:06:59.981 },
00:06:59.981 {
00:06:59.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:59.981 "dma_device_type": 2
00:06:59.981 }
00:06:59.981 ],
00:06:59.981 "driver_specific": {}
00:06:59.981 }
00:06:59.981 ]
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.981 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:59.981 "name": "Existed_Raid",
00:06:59.981 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:59.981 "strip_size_kb": 64,
00:06:59.981 "state": "configuring",
00:06:59.981 "raid_level": "raid0",
00:06:59.981 "superblock": false,
00:06:59.981 "num_base_bdevs": 2,
00:06:59.982 "num_base_bdevs_discovered": 1,
00:06:59.982 "num_base_bdevs_operational": 2,
00:06:59.982 "base_bdevs_list": [
00:06:59.982 {
00:06:59.982 "name": "BaseBdev1",
00:06:59.982 "uuid": "ac26fc77-692a-41a5-8d4a-e77b5511ef9a",
00:06:59.982 "is_configured": true,
00:06:59.982 "data_offset": 0,
00:06:59.982 "data_size": 65536
00:06:59.982 },
00:06:59.982 {
00:06:59.982 "name": "BaseBdev2",
00:06:59.982 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:59.982 "is_configured": false,
00:06:59.982 "data_offset": 0,
00:06:59.982 "data_size": 0
00:06:59.982 }
00:06:59.982 ]
00:06:59.982 }'
00:06:59.982 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:59.982 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.549 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:00.549 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.549 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.549 [2024-12-06 09:44:25.636040] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:00.549 [2024-12-06 09:44:25.636154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.550 [2024-12-06 09:44:25.648093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:00.550 [2024-12-06 09:44:25.650196] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:00.550 [2024-12-06 09:44:25.650238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:00.550 "name": "Existed_Raid",
00:07:00.550 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:00.550 "strip_size_kb": 64,
00:07:00.550 "state": "configuring",
00:07:00.550 "raid_level": "raid0",
00:07:00.550 "superblock": false,
00:07:00.550 "num_base_bdevs": 2,
00:07:00.550 "num_base_bdevs_discovered": 1,
00:07:00.550 "num_base_bdevs_operational": 2,
00:07:00.550 "base_bdevs_list": [
00:07:00.550 {
00:07:00.550 "name": "BaseBdev1",
00:07:00.550 "uuid": "ac26fc77-692a-41a5-8d4a-e77b5511ef9a",
00:07:00.550 "is_configured": true,
00:07:00.550 "data_offset": 0,
00:07:00.550 "data_size": 65536
00:07:00.550 },
00:07:00.550 {
00:07:00.550 "name": "BaseBdev2",
00:07:00.550 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:00.550 "is_configured": false,
00:07:00.550 "data_offset": 0,
00:07:00.550 "data_size": 0
00:07:00.550 }
00:07:00.550 ]
00:07:00.550 }'
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:00.550 09:44:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.808 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:00.808 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.808 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.067 [2024-12-06 09:44:26.109921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:01.067 [2024-12-06 09:44:26.110063] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:01.067 [2024-12-06 09:44:26.110090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:01.067 [2024-12-06 09:44:26.110418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:01.067 [2024-12-06 09:44:26.110634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:01.067 [2024-12-06 09:44:26.110683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:07:01.067 [2024-12-06 09:44:26.110999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:01.067 BaseBdev2
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.067 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.067 [
00:07:01.067 {
00:07:01.067 "name": "BaseBdev2",
00:07:01.067 "aliases": [
00:07:01.067 "e4bc48ea-8b7b-473d-ba13-46ba5b3672a7"
00:07:01.067 ],
00:07:01.067 "product_name": "Malloc disk",
00:07:01.067 "block_size": 512,
00:07:01.067 "num_blocks": 65536,
00:07:01.067 "uuid": "e4bc48ea-8b7b-473d-ba13-46ba5b3672a7",
00:07:01.067 "assigned_rate_limits": {
00:07:01.067 "rw_ios_per_sec": 0,
00:07:01.067 "rw_mbytes_per_sec": 0,
00:07:01.067 "r_mbytes_per_sec": 0,
00:07:01.067 "w_mbytes_per_sec": 0
00:07:01.067 },
00:07:01.067 "claimed": true,
00:07:01.067 "claim_type": "exclusive_write",
00:07:01.067 "zoned": false,
00:07:01.068 "supported_io_types": {
00:07:01.068 "read": true,
00:07:01.068 "write": true,
00:07:01.068 "unmap": true,
00:07:01.068 "flush": true,
00:07:01.068 "reset": true,
00:07:01.068 "nvme_admin": false,
00:07:01.068 "nvme_io": false,
00:07:01.068 "nvme_io_md": false,
00:07:01.068 "write_zeroes": true,
00:07:01.068 "zcopy": true,
00:07:01.068 "get_zone_info": false,
00:07:01.068 "zone_management": false,
00:07:01.068 "zone_append": false,
00:07:01.068 "compare": false,
00:07:01.068 "compare_and_write": false,
00:07:01.068 "abort": true,
00:07:01.068 "seek_hole": false,
00:07:01.068 "seek_data": false,
00:07:01.068 "copy": true,
00:07:01.068 "nvme_iov_md": false
00:07:01.068 },
00:07:01.068 "memory_domains": [
00:07:01.068 {
00:07:01.068 "dma_device_id": "system",
00:07:01.068 "dma_device_type": 1
00:07:01.068 },
00:07:01.068 {
00:07:01.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:01.068 "dma_device_type": 2
00:07:01.068 }
00:07:01.068 ],
00:07:01.068 "driver_specific": {}
00:07:01.068 }
00:07:01.068 ]
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:01.068 "name": "Existed_Raid",
00:07:01.068 "uuid": "a5a3fe90-23a6-4785-af31-05bdfc9941e8",
00:07:01.068 "strip_size_kb": 64,
00:07:01.068 "state": "online",
00:07:01.068 "raid_level": "raid0",
00:07:01.068 "superblock": false,
00:07:01.068 "num_base_bdevs": 2,
00:07:01.068 "num_base_bdevs_discovered": 2,
00:07:01.068 "num_base_bdevs_operational": 2,
00:07:01.068 "base_bdevs_list": [
00:07:01.068 {
00:07:01.068 "name": "BaseBdev1",
00:07:01.068 "uuid": "ac26fc77-692a-41a5-8d4a-e77b5511ef9a",
00:07:01.068 "is_configured": true,
00:07:01.068 "data_offset": 0,
00:07:01.068 "data_size": 65536
00:07:01.068 },
00:07:01.068 {
00:07:01.068 "name": "BaseBdev2",
00:07:01.068 "uuid": "e4bc48ea-8b7b-473d-ba13-46ba5b3672a7",
00:07:01.068 "is_configured": true,
00:07:01.068 "data_offset": 0,
00:07:01.068 "data_size": 65536
00:07:01.068 }
00:07:01.068 ]
00:07:01.068 }'
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:01.068 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.328 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:07:01.328 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:07:01.328 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:01.328 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:01.328 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:01.328 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:01.328 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:07:01.328 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.328 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.328 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' [2024-12-06 09:44:26.541537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:01.328 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.328 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:01.328 "name": "Existed_Raid",
00:07:01.328 "aliases": [
00:07:01.328 "a5a3fe90-23a6-4785-af31-05bdfc9941e8"
00:07:01.328 ],
00:07:01.328 "product_name": "Raid Volume",
00:07:01.328 "block_size": 512,
00:07:01.328 "num_blocks": 131072,
00:07:01.328 "uuid": "a5a3fe90-23a6-4785-af31-05bdfc9941e8",
00:07:01.328 "assigned_rate_limits": {
00:07:01.328 "rw_ios_per_sec": 0,
00:07:01.328 "rw_mbytes_per_sec": 0,
00:07:01.328 "r_mbytes_per_sec": 0,
00:07:01.328 "w_mbytes_per_sec": 0
00:07:01.328 },
00:07:01.328 "claimed": false,
00:07:01.328 "zoned": false,
00:07:01.328 "supported_io_types": {
00:07:01.328 "read": true,
00:07:01.328 "write": true,
00:07:01.328 "unmap": true,
00:07:01.328 "flush": true,
00:07:01.328 "reset": true,
00:07:01.328 "nvme_admin": false,
00:07:01.328 "nvme_io": false,
00:07:01.328 "nvme_io_md": false,
00:07:01.328 "write_zeroes": true,
00:07:01.328 "zcopy": false,
00:07:01.328 "get_zone_info": false,
00:07:01.328 "zone_management": false,
00:07:01.328 "zone_append": false,
00:07:01.328 "compare": false,
00:07:01.328 "compare_and_write": false,
00:07:01.328 "abort": false,
00:07:01.328 "seek_hole": false,
00:07:01.328 "seek_data": false,
00:07:01.328 "copy": false,
00:07:01.328 "nvme_iov_md": false
00:07:01.328 },
00:07:01.328 "memory_domains": [
00:07:01.328 {
00:07:01.328 "dma_device_id": "system",
00:07:01.328 "dma_device_type": 1
00:07:01.328 },
00:07:01.328 {
00:07:01.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:01.328 "dma_device_type": 2
00:07:01.328 },
00:07:01.328 {
00:07:01.328 "dma_device_id": "system",
00:07:01.328 "dma_device_type": 1
00:07:01.328 },
00:07:01.328 {
00:07:01.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:01.328 "dma_device_type": 2
00:07:01.328 }
00:07:01.328 ],
00:07:01.328 "driver_specific": {
00:07:01.328 "raid": {
00:07:01.328 "uuid": "a5a3fe90-23a6-4785-af31-05bdfc9941e8",
00:07:01.328 "strip_size_kb": 64,
00:07:01.328 "state": "online",
00:07:01.328 "raid_level": "raid0",
00:07:01.328 "superblock": false,
00:07:01.328 "num_base_bdevs": 2,
00:07:01.328 "num_base_bdevs_discovered": 2,
00:07:01.328 "num_base_bdevs_operational": 2,
00:07:01.328 "base_bdevs_list": [
00:07:01.328 {
00:07:01.328 "name": "BaseBdev1",
00:07:01.328 "uuid": "ac26fc77-692a-41a5-8d4a-e77b5511ef9a",
00:07:01.328 "is_configured": true,
00:07:01.328 "data_offset": 0,
00:07:01.328 "data_size": 65536
00:07:01.328 },
00:07:01.328 {
00:07:01.328 "name": "BaseBdev2",
00:07:01.328 "uuid": "e4bc48ea-8b7b-473d-ba13-46ba5b3672a7",
00:07:01.328 "is_configured": true,
00:07:01.328 "data_offset": 0,
00:07:01.328 "data_size": 65536
00:07:01.328 }
00:07:01.328 ]
00:07:01.328 }
00:07:01.328 }
00:07:01.328 }'
00:07:01.328 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:07:01.588 BaseBdev2'
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.588 [2024-12-06 09:44:26.740967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 [2024-12-06 09:44:26.741050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline [2024-12-06 09:44:26.741126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.588 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.847 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.847 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:01.847 "name": "Existed_Raid",
00:07:01.847 "uuid": "a5a3fe90-23a6-4785-af31-05bdfc9941e8",
00:07:01.847 "strip_size_kb": 64,
00:07:01.847 "state": "offline",
00:07:01.847 "raid_level": "raid0",
00:07:01.847 "superblock": false,
00:07:01.847 "num_base_bdevs": 2,
00:07:01.847 "num_base_bdevs_discovered": 1,
00:07:01.847 "num_base_bdevs_operational": 1,
00:07:01.847 "base_bdevs_list": [
00:07:01.847 {
00:07:01.847 "name": null,
00:07:01.847 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:01.847 "is_configured": false,
00:07:01.847 "data_offset": 0,
00:07:01.847 "data_size": 65536
00:07:01.847 },
00:07:01.847 {
00:07:01.847 "name": "BaseBdev2",
00:07:01.847 "uuid": "e4bc48ea-8b7b-473d-ba13-46ba5b3672a7",
00:07:01.847 "is_configured": true,
00:07:01.847 "data_offset": 0,
00:07:01.847 "data_size": 65536
00:07:01.847 }
00:07:01.847 ]
00:07:01.847 }'
00:07:01.847 09:44:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:01.847 09:44:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.105 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:07:02.105 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:02.105 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:02.105 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:02.105 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.105 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.105 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.105 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:02.105 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:02.105 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:07:02.105 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.105 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.105 [2024-12-06 09:44:27.306266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 [2024-12-06 09:44:27.306367] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:07:02.364 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60681
00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60681 ']'
00:07:02.365 09:44:27 bdev_raid.raid_state_function_test --
common/autotest_common.sh@958 -- # kill -0 60681 00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60681 00:07:02.365 killing process with pid 60681 00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60681' 00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60681 00:07:02.365 [2024-12-06 09:44:27.492028] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:02.365 09:44:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60681 00:07:02.365 [2024-12-06 09:44:27.507659] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:03.742 00:07:03.742 real 0m4.940s 00:07:03.742 user 0m7.105s 00:07:03.742 sys 0m0.730s 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.742 ************************************ 00:07:03.742 END TEST raid_state_function_test 00:07:03.742 ************************************ 00:07:03.742 09:44:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:03.742 09:44:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:03.742 09:44:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.742 09:44:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:03.742 ************************************ 00:07:03.742 START TEST raid_state_function_test_sb 00:07:03.742 ************************************ 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:03.742 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60929 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:03.743 Process raid pid: 60929 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60929' 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60929 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60929 ']' 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.743 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.743 09:44:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.743 [2024-12-06 09:44:28.819588] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:03.743 [2024-12-06 09:44:28.819710] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.743 [2024-12-06 09:44:28.988442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.002 [2024-12-06 09:44:29.113844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.261 [2024-12-06 09:44:29.327188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.261 [2024-12-06 09:44:29.327232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.519 [2024-12-06 09:44:29.696038] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:04.519 [2024-12-06 09:44:29.696229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:04.519 [2024-12-06 09:44:29.696297] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:04.519 [2024-12-06 09:44:29.696356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.519 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:04.520 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.520 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.520 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.520 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.520 "name": "Existed_Raid", 00:07:04.520 "uuid": "00028c9d-48d7-4c44-97db-1471e0bf0a6d", 00:07:04.520 "strip_size_kb": 64, 00:07:04.520 "state": "configuring", 00:07:04.520 "raid_level": "raid0", 00:07:04.520 "superblock": true, 00:07:04.520 "num_base_bdevs": 2, 00:07:04.520 "num_base_bdevs_discovered": 0, 00:07:04.520 "num_base_bdevs_operational": 2, 00:07:04.520 "base_bdevs_list": [ 00:07:04.520 { 00:07:04.520 "name": "BaseBdev1", 00:07:04.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.520 "is_configured": false, 00:07:04.520 "data_offset": 0, 00:07:04.520 "data_size": 0 00:07:04.520 }, 00:07:04.520 { 00:07:04.520 "name": "BaseBdev2", 00:07:04.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.520 "is_configured": false, 00:07:04.520 "data_offset": 0, 00:07:04.520 "data_size": 0 00:07:04.520 } 00:07:04.520 ] 00:07:04.520 }' 00:07:04.520 09:44:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.520 09:44:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.102 [2024-12-06 09:44:30.143245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:05.102 
[2024-12-06 09:44:30.143333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.102 [2024-12-06 09:44:30.155232] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:05.102 [2024-12-06 09:44:30.155317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:05.102 [2024-12-06 09:44:30.155366] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:05.102 [2024-12-06 09:44:30.155396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.102 [2024-12-06 09:44:30.201059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:05.102 BaseBdev1 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.102 [ 00:07:05.102 { 00:07:05.102 "name": "BaseBdev1", 00:07:05.102 "aliases": [ 00:07:05.102 "0597be1d-4608-4933-8b99-c448f8bcef32" 00:07:05.102 ], 00:07:05.102 "product_name": "Malloc disk", 00:07:05.102 "block_size": 512, 00:07:05.102 "num_blocks": 65536, 00:07:05.102 "uuid": "0597be1d-4608-4933-8b99-c448f8bcef32", 00:07:05.102 "assigned_rate_limits": { 00:07:05.102 "rw_ios_per_sec": 0, 00:07:05.102 "rw_mbytes_per_sec": 0, 00:07:05.102 "r_mbytes_per_sec": 0, 00:07:05.102 "w_mbytes_per_sec": 0 00:07:05.102 }, 00:07:05.102 "claimed": true, 00:07:05.102 "claim_type": 
"exclusive_write", 00:07:05.102 "zoned": false, 00:07:05.102 "supported_io_types": { 00:07:05.102 "read": true, 00:07:05.102 "write": true, 00:07:05.102 "unmap": true, 00:07:05.102 "flush": true, 00:07:05.102 "reset": true, 00:07:05.102 "nvme_admin": false, 00:07:05.102 "nvme_io": false, 00:07:05.102 "nvme_io_md": false, 00:07:05.102 "write_zeroes": true, 00:07:05.102 "zcopy": true, 00:07:05.102 "get_zone_info": false, 00:07:05.102 "zone_management": false, 00:07:05.102 "zone_append": false, 00:07:05.102 "compare": false, 00:07:05.102 "compare_and_write": false, 00:07:05.102 "abort": true, 00:07:05.102 "seek_hole": false, 00:07:05.102 "seek_data": false, 00:07:05.102 "copy": true, 00:07:05.102 "nvme_iov_md": false 00:07:05.102 }, 00:07:05.102 "memory_domains": [ 00:07:05.102 { 00:07:05.102 "dma_device_id": "system", 00:07:05.102 "dma_device_type": 1 00:07:05.102 }, 00:07:05.102 { 00:07:05.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.102 "dma_device_type": 2 00:07:05.102 } 00:07:05.102 ], 00:07:05.102 "driver_specific": {} 00:07:05.102 } 00:07:05.102 ] 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.102 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.102 "name": "Existed_Raid", 00:07:05.102 "uuid": "e689b341-459b-4d51-8cfa-296de3064d2c", 00:07:05.102 "strip_size_kb": 64, 00:07:05.102 "state": "configuring", 00:07:05.102 "raid_level": "raid0", 00:07:05.102 "superblock": true, 00:07:05.102 "num_base_bdevs": 2, 00:07:05.103 "num_base_bdevs_discovered": 1, 00:07:05.103 "num_base_bdevs_operational": 2, 00:07:05.103 "base_bdevs_list": [ 00:07:05.103 { 00:07:05.103 "name": "BaseBdev1", 00:07:05.103 "uuid": "0597be1d-4608-4933-8b99-c448f8bcef32", 00:07:05.103 "is_configured": true, 00:07:05.103 "data_offset": 2048, 00:07:05.103 "data_size": 63488 00:07:05.103 }, 00:07:05.103 { 00:07:05.103 "name": "BaseBdev2", 00:07:05.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.103 "is_configured": false, 00:07:05.103 "data_offset": 0, 00:07:05.103 
"data_size": 0 00:07:05.103 } 00:07:05.103 ] 00:07:05.103 }' 00:07:05.103 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.103 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.361 [2024-12-06 09:44:30.608413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:05.361 [2024-12-06 09:44:30.608518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.361 [2024-12-06 09:44:30.620431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:05.361 [2024-12-06 09:44:30.622491] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:05.361 [2024-12-06 09:44:30.622534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 
00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.361 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.362 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.362 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.362 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.362 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.362 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.362 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.362 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.362 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.621 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.621 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:05.621 "name": "Existed_Raid", 00:07:05.621 "uuid": "045ae4b4-d69e-455f-b39e-8de095ebff43", 00:07:05.621 "strip_size_kb": 64, 00:07:05.621 "state": "configuring", 00:07:05.621 "raid_level": "raid0", 00:07:05.621 "superblock": true, 00:07:05.621 "num_base_bdevs": 2, 00:07:05.621 "num_base_bdevs_discovered": 1, 00:07:05.621 "num_base_bdevs_operational": 2, 00:07:05.621 "base_bdevs_list": [ 00:07:05.621 { 00:07:05.621 "name": "BaseBdev1", 00:07:05.621 "uuid": "0597be1d-4608-4933-8b99-c448f8bcef32", 00:07:05.621 "is_configured": true, 00:07:05.621 "data_offset": 2048, 00:07:05.621 "data_size": 63488 00:07:05.621 }, 00:07:05.621 { 00:07:05.621 "name": "BaseBdev2", 00:07:05.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.621 "is_configured": false, 00:07:05.621 "data_offset": 0, 00:07:05.621 "data_size": 0 00:07:05.621 } 00:07:05.621 ] 00:07:05.621 }' 00:07:05.621 09:44:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.621 09:44:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.881 [2024-12-06 09:44:31.095680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:05.881 [2024-12-06 09:44:31.096069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:05.881 [2024-12-06 09:44:31.096128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:05.881 [2024-12-06 09:44:31.096430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:05.881 [2024-12-06 09:44:31.096639] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:05.881 [2024-12-06 09:44:31.096687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:05.881 BaseBdev2 00:07:05.881 [2024-12-06 09:44:31.096891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:05.881 [ 00:07:05.881 { 00:07:05.881 "name": "BaseBdev2", 00:07:05.881 "aliases": [ 00:07:05.881 "2bd773af-0edf-490c-ae35-9468e5cbcaf5" 00:07:05.881 ], 00:07:05.881 "product_name": "Malloc disk", 00:07:05.881 "block_size": 512, 00:07:05.881 "num_blocks": 65536, 00:07:05.881 "uuid": "2bd773af-0edf-490c-ae35-9468e5cbcaf5", 00:07:05.881 "assigned_rate_limits": { 00:07:05.881 "rw_ios_per_sec": 0, 00:07:05.881 "rw_mbytes_per_sec": 0, 00:07:05.881 "r_mbytes_per_sec": 0, 00:07:05.881 "w_mbytes_per_sec": 0 00:07:05.881 }, 00:07:05.881 "claimed": true, 00:07:05.881 "claim_type": "exclusive_write", 00:07:05.881 "zoned": false, 00:07:05.881 "supported_io_types": { 00:07:05.881 "read": true, 00:07:05.881 "write": true, 00:07:05.881 "unmap": true, 00:07:05.881 "flush": true, 00:07:05.881 "reset": true, 00:07:05.881 "nvme_admin": false, 00:07:05.881 "nvme_io": false, 00:07:05.881 "nvme_io_md": false, 00:07:05.881 "write_zeroes": true, 00:07:05.881 "zcopy": true, 00:07:05.881 "get_zone_info": false, 00:07:05.881 "zone_management": false, 00:07:05.881 "zone_append": false, 00:07:05.881 "compare": false, 00:07:05.881 "compare_and_write": false, 00:07:05.881 "abort": true, 00:07:05.881 "seek_hole": false, 00:07:05.881 "seek_data": false, 00:07:05.881 "copy": true, 00:07:05.881 "nvme_iov_md": false 00:07:05.881 }, 00:07:05.881 "memory_domains": [ 00:07:05.881 { 00:07:05.881 "dma_device_id": "system", 00:07:05.881 "dma_device_type": 1 00:07:05.881 }, 00:07:05.881 { 00:07:05.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.881 "dma_device_type": 2 00:07:05.881 } 00:07:05.881 ], 00:07:05.881 "driver_specific": {} 00:07:05.881 } 00:07:05.881 ] 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:05.881 
09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.881 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.141 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.141 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.141 "name": 
"Existed_Raid", 00:07:06.141 "uuid": "045ae4b4-d69e-455f-b39e-8de095ebff43", 00:07:06.141 "strip_size_kb": 64, 00:07:06.141 "state": "online", 00:07:06.141 "raid_level": "raid0", 00:07:06.141 "superblock": true, 00:07:06.141 "num_base_bdevs": 2, 00:07:06.141 "num_base_bdevs_discovered": 2, 00:07:06.141 "num_base_bdevs_operational": 2, 00:07:06.141 "base_bdevs_list": [ 00:07:06.141 { 00:07:06.141 "name": "BaseBdev1", 00:07:06.141 "uuid": "0597be1d-4608-4933-8b99-c448f8bcef32", 00:07:06.141 "is_configured": true, 00:07:06.141 "data_offset": 2048, 00:07:06.141 "data_size": 63488 00:07:06.141 }, 00:07:06.141 { 00:07:06.141 "name": "BaseBdev2", 00:07:06.141 "uuid": "2bd773af-0edf-490c-ae35-9468e5cbcaf5", 00:07:06.141 "is_configured": true, 00:07:06.141 "data_offset": 2048, 00:07:06.142 "data_size": 63488 00:07:06.142 } 00:07:06.142 ] 00:07:06.142 }' 00:07:06.142 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.142 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.402 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:06.402 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:06.402 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:06.402 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:06.402 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:06.402 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:06.402 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:06.402 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:07:06.402 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.402 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.402 [2024-12-06 09:44:31.591182] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.402 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.402 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:06.402 "name": "Existed_Raid", 00:07:06.402 "aliases": [ 00:07:06.402 "045ae4b4-d69e-455f-b39e-8de095ebff43" 00:07:06.402 ], 00:07:06.402 "product_name": "Raid Volume", 00:07:06.402 "block_size": 512, 00:07:06.402 "num_blocks": 126976, 00:07:06.402 "uuid": "045ae4b4-d69e-455f-b39e-8de095ebff43", 00:07:06.402 "assigned_rate_limits": { 00:07:06.402 "rw_ios_per_sec": 0, 00:07:06.402 "rw_mbytes_per_sec": 0, 00:07:06.402 "r_mbytes_per_sec": 0, 00:07:06.402 "w_mbytes_per_sec": 0 00:07:06.402 }, 00:07:06.402 "claimed": false, 00:07:06.402 "zoned": false, 00:07:06.402 "supported_io_types": { 00:07:06.402 "read": true, 00:07:06.402 "write": true, 00:07:06.402 "unmap": true, 00:07:06.402 "flush": true, 00:07:06.402 "reset": true, 00:07:06.402 "nvme_admin": false, 00:07:06.402 "nvme_io": false, 00:07:06.402 "nvme_io_md": false, 00:07:06.402 "write_zeroes": true, 00:07:06.402 "zcopy": false, 00:07:06.402 "get_zone_info": false, 00:07:06.402 "zone_management": false, 00:07:06.402 "zone_append": false, 00:07:06.402 "compare": false, 00:07:06.402 "compare_and_write": false, 00:07:06.402 "abort": false, 00:07:06.402 "seek_hole": false, 00:07:06.402 "seek_data": false, 00:07:06.402 "copy": false, 00:07:06.402 "nvme_iov_md": false 00:07:06.402 }, 00:07:06.402 "memory_domains": [ 00:07:06.402 { 00:07:06.402 "dma_device_id": "system", 00:07:06.402 "dma_device_type": 1 00:07:06.402 }, 00:07:06.402 { 00:07:06.402 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:06.402 "dma_device_type": 2 00:07:06.402 }, 00:07:06.402 { 00:07:06.402 "dma_device_id": "system", 00:07:06.402 "dma_device_type": 1 00:07:06.402 }, 00:07:06.402 { 00:07:06.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.402 "dma_device_type": 2 00:07:06.402 } 00:07:06.402 ], 00:07:06.402 "driver_specific": { 00:07:06.402 "raid": { 00:07:06.402 "uuid": "045ae4b4-d69e-455f-b39e-8de095ebff43", 00:07:06.402 "strip_size_kb": 64, 00:07:06.402 "state": "online", 00:07:06.402 "raid_level": "raid0", 00:07:06.402 "superblock": true, 00:07:06.402 "num_base_bdevs": 2, 00:07:06.402 "num_base_bdevs_discovered": 2, 00:07:06.402 "num_base_bdevs_operational": 2, 00:07:06.402 "base_bdevs_list": [ 00:07:06.402 { 00:07:06.402 "name": "BaseBdev1", 00:07:06.402 "uuid": "0597be1d-4608-4933-8b99-c448f8bcef32", 00:07:06.402 "is_configured": true, 00:07:06.402 "data_offset": 2048, 00:07:06.402 "data_size": 63488 00:07:06.402 }, 00:07:06.402 { 00:07:06.402 "name": "BaseBdev2", 00:07:06.402 "uuid": "2bd773af-0edf-490c-ae35-9468e5cbcaf5", 00:07:06.402 "is_configured": true, 00:07:06.402 "data_offset": 2048, 00:07:06.402 "data_size": 63488 00:07:06.402 } 00:07:06.402 ] 00:07:06.402 } 00:07:06.402 } 00:07:06.402 }' 00:07:06.402 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:06.663 BaseBdev2' 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:06.663 09:44:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.663 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.664 [2024-12-06 09:44:31.822554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:06.664 [2024-12-06 09:44:31.822638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:06.664 [2024-12-06 09:44:31.822715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.664 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.924 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.924 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.924 "name": "Existed_Raid", 00:07:06.924 "uuid": "045ae4b4-d69e-455f-b39e-8de095ebff43", 00:07:06.924 "strip_size_kb": 64, 00:07:06.924 "state": "offline", 00:07:06.924 "raid_level": "raid0", 00:07:06.924 "superblock": true, 00:07:06.924 "num_base_bdevs": 2, 00:07:06.924 "num_base_bdevs_discovered": 1, 00:07:06.924 "num_base_bdevs_operational": 1, 00:07:06.924 "base_bdevs_list": [ 00:07:06.924 { 00:07:06.924 "name": null, 00:07:06.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.924 "is_configured": false, 00:07:06.924 "data_offset": 0, 00:07:06.924 "data_size": 63488 00:07:06.924 }, 00:07:06.924 { 00:07:06.924 "name": "BaseBdev2", 00:07:06.924 "uuid": "2bd773af-0edf-490c-ae35-9468e5cbcaf5", 00:07:06.924 "is_configured": true, 00:07:06.924 "data_offset": 2048, 00:07:06.924 "data_size": 63488 00:07:06.924 } 00:07:06.924 ] 00:07:06.924 }' 00:07:06.924 09:44:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:07:06.924 09:44:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.184 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:07.184 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:07.184 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:07.184 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.184 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.184 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.184 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.184 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:07.184 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:07.184 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:07.184 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.184 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.184 [2024-12-06 09:44:32.385331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:07.184 [2024-12-06 09:44:32.385449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:07.444 09:44:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60929 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60929 ']' 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60929 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60929 00:07:07.444 killing process with pid 60929 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.444 09:44:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60929' 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60929 00:07:07.444 [2024-12-06 09:44:32.570429] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.444 09:44:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60929 00:07:07.444 [2024-12-06 09:44:32.589078] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.825 ************************************ 00:07:08.825 END TEST raid_state_function_test_sb 00:07:08.825 ************************************ 00:07:08.825 09:44:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:08.825 00:07:08.825 real 0m5.010s 00:07:08.825 user 0m7.248s 00:07:08.825 sys 0m0.761s 00:07:08.825 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.825 09:44:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:08.825 09:44:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:08.825 09:44:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:08.825 09:44:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.825 09:44:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.825 ************************************ 00:07:08.825 START TEST raid_superblock_test 00:07:08.825 ************************************ 00:07:08.825 09:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:08.825 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:08.825 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:08.825 09:44:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:08.825 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:08.825 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:08.825 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:08.825 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:08.825 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:08.825 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:08.825 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61181 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61181 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61181 ']' 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.826 09:44:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.826 [2024-12-06 09:44:33.889414] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:08.826 [2024-12-06 09:44:33.889631] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61181 ] 00:07:08.826 [2024-12-06 09:44:34.042031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.085 [2024-12-06 09:44:34.160698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.344 [2024-12-06 09:44:34.371186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.344 [2024-12-06 09:44:34.371333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- 
# local bdev_malloc=malloc1 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.604 malloc1 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.604 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.604 [2024-12-06 09:44:34.787694] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:09.604 [2024-12-06 09:44:34.787776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.605 [2024-12-06 09:44:34.787802] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:09.605 [2024-12-06 09:44:34.787812] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.605 [2024-12-06 09:44:34.790102] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.605 [2024-12-06 09:44:34.790154] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:09.605 pt1 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.605 malloc2 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.605 [2024-12-06 09:44:34.845074] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:09.605 [2024-12-06 09:44:34.845200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.605 [2024-12-06 09:44:34.845234] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:09.605 [2024-12-06 09:44:34.845245] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.605 [2024-12-06 09:44:34.847585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.605 [2024-12-06 09:44:34.847656] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:09.605 pt2 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.605 [2024-12-06 09:44:34.857083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:09.605 [2024-12-06 09:44:34.859065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:09.605 [2024-12-06 09:44:34.859310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:09.605 [2024-12-06 09:44:34.859365] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:09.605 [2024-12-06 09:44:34.859677] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:09.605 [2024-12-06 09:44:34.859891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:09.605 [2024-12-06 09:44:34.859939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:09.605 [2024-12-06 09:44:34.860172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.605 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.865 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.865 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.865 "name": "raid_bdev1", 00:07:09.865 "uuid": "49eaa40d-b585-4bec-8eca-00c7ca9e8961", 00:07:09.865 "strip_size_kb": 64, 00:07:09.865 "state": "online", 00:07:09.865 "raid_level": "raid0", 00:07:09.865 "superblock": true, 00:07:09.865 "num_base_bdevs": 2, 00:07:09.865 "num_base_bdevs_discovered": 2, 00:07:09.865 "num_base_bdevs_operational": 2, 00:07:09.865 "base_bdevs_list": [ 00:07:09.865 { 00:07:09.865 "name": "pt1", 00:07:09.865 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.865 "is_configured": true, 00:07:09.865 "data_offset": 2048, 00:07:09.865 "data_size": 63488 00:07:09.865 }, 00:07:09.865 { 00:07:09.865 "name": "pt2", 00:07:09.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.865 "is_configured": true, 00:07:09.865 "data_offset": 2048, 00:07:09.865 "data_size": 63488 00:07:09.865 } 00:07:09.865 ] 00:07:09.865 }' 00:07:09.865 09:44:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.865 09:44:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:10.125 09:44:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.125 [2024-12-06 09:44:35.292668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:10.125 "name": "raid_bdev1", 00:07:10.125 "aliases": [ 00:07:10.125 "49eaa40d-b585-4bec-8eca-00c7ca9e8961" 00:07:10.125 ], 00:07:10.125 "product_name": "Raid Volume", 00:07:10.125 "block_size": 512, 00:07:10.125 "num_blocks": 126976, 00:07:10.125 "uuid": "49eaa40d-b585-4bec-8eca-00c7ca9e8961", 00:07:10.125 "assigned_rate_limits": { 00:07:10.125 "rw_ios_per_sec": 0, 00:07:10.125 "rw_mbytes_per_sec": 0, 00:07:10.125 "r_mbytes_per_sec": 0, 00:07:10.125 "w_mbytes_per_sec": 0 00:07:10.125 }, 00:07:10.125 "claimed": false, 00:07:10.125 "zoned": false, 00:07:10.125 "supported_io_types": { 00:07:10.125 "read": true, 00:07:10.125 "write": true, 00:07:10.125 "unmap": true, 00:07:10.125 "flush": true, 00:07:10.125 "reset": true, 00:07:10.125 "nvme_admin": false, 00:07:10.125 "nvme_io": false, 00:07:10.125 "nvme_io_md": false, 00:07:10.125 "write_zeroes": true, 00:07:10.125 "zcopy": false, 00:07:10.125 "get_zone_info": false, 00:07:10.125 "zone_management": false, 00:07:10.125 "zone_append": false, 00:07:10.125 "compare": false, 00:07:10.125 "compare_and_write": false, 00:07:10.125 "abort": false, 00:07:10.125 "seek_hole": false, 00:07:10.125 
"seek_data": false, 00:07:10.125 "copy": false, 00:07:10.125 "nvme_iov_md": false 00:07:10.125 }, 00:07:10.125 "memory_domains": [ 00:07:10.125 { 00:07:10.125 "dma_device_id": "system", 00:07:10.125 "dma_device_type": 1 00:07:10.125 }, 00:07:10.125 { 00:07:10.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.125 "dma_device_type": 2 00:07:10.125 }, 00:07:10.125 { 00:07:10.125 "dma_device_id": "system", 00:07:10.125 "dma_device_type": 1 00:07:10.125 }, 00:07:10.125 { 00:07:10.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.125 "dma_device_type": 2 00:07:10.125 } 00:07:10.125 ], 00:07:10.125 "driver_specific": { 00:07:10.125 "raid": { 00:07:10.125 "uuid": "49eaa40d-b585-4bec-8eca-00c7ca9e8961", 00:07:10.125 "strip_size_kb": 64, 00:07:10.125 "state": "online", 00:07:10.125 "raid_level": "raid0", 00:07:10.125 "superblock": true, 00:07:10.125 "num_base_bdevs": 2, 00:07:10.125 "num_base_bdevs_discovered": 2, 00:07:10.125 "num_base_bdevs_operational": 2, 00:07:10.125 "base_bdevs_list": [ 00:07:10.125 { 00:07:10.125 "name": "pt1", 00:07:10.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:10.125 "is_configured": true, 00:07:10.125 "data_offset": 2048, 00:07:10.125 "data_size": 63488 00:07:10.125 }, 00:07:10.125 { 00:07:10.125 "name": "pt2", 00:07:10.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.125 "is_configured": true, 00:07:10.125 "data_offset": 2048, 00:07:10.125 "data_size": 63488 00:07:10.125 } 00:07:10.125 ] 00:07:10.125 } 00:07:10.125 } 00:07:10.125 }' 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:10.125 pt2' 00:07:10.125 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.384 09:44:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:10.384 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.384 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:10.384 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 [2024-12-06 09:44:35.504323] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=49eaa40d-b585-4bec-8eca-00c7ca9e8961 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 49eaa40d-b585-4bec-8eca-00c7ca9e8961 ']' 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 [2024-12-06 09:44:35.531912] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:10.385 [2024-12-06 09:44:35.531976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:10.385 [2024-12-06 09:44:35.532104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.385 [2024-12-06 09:44:35.532184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.385 [2024-12-06 09:44:35.532256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.385 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.645 [2024-12-06 09:44:35.667732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:10.645 [2024-12-06 09:44:35.669757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:10.645 [2024-12-06 09:44:35.669870] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:10.645 [2024-12-06 09:44:35.669959] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:10.645 [2024-12-06 09:44:35.670026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:10.645 [2024-12-06 09:44:35.670073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:10.645 request: 00:07:10.645 { 00:07:10.645 "name": "raid_bdev1", 00:07:10.645 "raid_level": "raid0", 00:07:10.645 "base_bdevs": [ 00:07:10.645 "malloc1", 00:07:10.645 "malloc2" 00:07:10.645 ], 00:07:10.645 "strip_size_kb": 64, 00:07:10.645 "superblock": false, 00:07:10.645 "method": "bdev_raid_create", 00:07:10.645 "req_id": 1 00:07:10.645 } 00:07:10.645 Got JSON-RPC error response 00:07:10.645 response: 00:07:10.645 { 00:07:10.645 "code": -17, 00:07:10.645 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:10.645 } 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.645 
09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.645 [2024-12-06 09:44:35.723649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:10.645 [2024-12-06 09:44:35.723769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.645 [2024-12-06 09:44:35.723808] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:10.645 [2024-12-06 09:44:35.723842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.645 [2024-12-06 09:44:35.726269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.645 [2024-12-06 09:44:35.726351] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:10.645 [2024-12-06 09:44:35.726472] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:10.645 [2024-12-06 09:44:35.726585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:10.645 pt1 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.645 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.645 "name": "raid_bdev1", 00:07:10.645 "uuid": "49eaa40d-b585-4bec-8eca-00c7ca9e8961", 00:07:10.645 "strip_size_kb": 64, 00:07:10.645 "state": "configuring", 00:07:10.645 "raid_level": "raid0", 00:07:10.645 "superblock": true, 00:07:10.645 "num_base_bdevs": 2, 00:07:10.645 "num_base_bdevs_discovered": 1, 00:07:10.645 "num_base_bdevs_operational": 2, 00:07:10.645 "base_bdevs_list": [ 00:07:10.645 { 00:07:10.645 "name": "pt1", 00:07:10.645 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:10.645 "is_configured": true, 00:07:10.645 "data_offset": 2048, 00:07:10.645 "data_size": 63488 00:07:10.645 }, 00:07:10.645 { 00:07:10.645 "name": null, 00:07:10.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:10.645 "is_configured": false, 00:07:10.645 "data_offset": 2048, 00:07:10.645 "data_size": 63488 00:07:10.645 } 00:07:10.645 ] 00:07:10.645 }' 00:07:10.646 09:44:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.646 09:44:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.231 [2024-12-06 09:44:36.206840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:11.231 [2024-12-06 09:44:36.206973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.231 [2024-12-06 09:44:36.207014] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:11.231 [2024-12-06 09:44:36.207045] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.231 [2024-12-06 09:44:36.207610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.231 [2024-12-06 09:44:36.207680] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:11.231 [2024-12-06 09:44:36.207801] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:11.231 [2024-12-06 09:44:36.207865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:11.231 [2024-12-06 09:44:36.208031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:11.231 [2024-12-06 09:44:36.208076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:11.231 [2024-12-06 09:44:36.208368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:11.231 [2024-12-06 09:44:36.208560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:11.231 [2024-12-06 09:44:36.208604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:11.231 [2024-12-06 09:44:36.208813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.231 pt2 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.231 "name": "raid_bdev1", 00:07:11.231 "uuid": "49eaa40d-b585-4bec-8eca-00c7ca9e8961", 00:07:11.231 "strip_size_kb": 64, 00:07:11.231 "state": "online", 00:07:11.231 "raid_level": "raid0", 00:07:11.231 "superblock": true, 00:07:11.231 "num_base_bdevs": 2, 00:07:11.231 "num_base_bdevs_discovered": 2, 00:07:11.231 "num_base_bdevs_operational": 2, 00:07:11.231 "base_bdevs_list": [ 00:07:11.231 { 00:07:11.231 "name": "pt1", 00:07:11.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:11.231 "is_configured": true, 00:07:11.231 "data_offset": 2048, 00:07:11.231 "data_size": 63488 00:07:11.231 }, 00:07:11.231 { 00:07:11.231 "name": "pt2", 00:07:11.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:11.231 "is_configured": true, 00:07:11.231 "data_offset": 2048, 00:07:11.231 "data_size": 63488 00:07:11.231 } 00:07:11.231 ] 00:07:11.231 }' 00:07:11.231 09:44:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.231 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.556 [2024-12-06 09:44:36.686340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:11.556 "name": "raid_bdev1", 00:07:11.556 "aliases": [ 00:07:11.556 "49eaa40d-b585-4bec-8eca-00c7ca9e8961" 00:07:11.556 ], 00:07:11.556 "product_name": "Raid Volume", 00:07:11.556 "block_size": 512, 00:07:11.556 "num_blocks": 126976, 00:07:11.556 "uuid": "49eaa40d-b585-4bec-8eca-00c7ca9e8961", 00:07:11.556 "assigned_rate_limits": { 00:07:11.556 "rw_ios_per_sec": 0, 00:07:11.556 "rw_mbytes_per_sec": 0, 00:07:11.556 
"r_mbytes_per_sec": 0, 00:07:11.556 "w_mbytes_per_sec": 0 00:07:11.556 }, 00:07:11.556 "claimed": false, 00:07:11.556 "zoned": false, 00:07:11.556 "supported_io_types": { 00:07:11.556 "read": true, 00:07:11.556 "write": true, 00:07:11.556 "unmap": true, 00:07:11.556 "flush": true, 00:07:11.556 "reset": true, 00:07:11.556 "nvme_admin": false, 00:07:11.556 "nvme_io": false, 00:07:11.556 "nvme_io_md": false, 00:07:11.556 "write_zeroes": true, 00:07:11.556 "zcopy": false, 00:07:11.556 "get_zone_info": false, 00:07:11.556 "zone_management": false, 00:07:11.556 "zone_append": false, 00:07:11.556 "compare": false, 00:07:11.556 "compare_and_write": false, 00:07:11.556 "abort": false, 00:07:11.556 "seek_hole": false, 00:07:11.556 "seek_data": false, 00:07:11.556 "copy": false, 00:07:11.556 "nvme_iov_md": false 00:07:11.556 }, 00:07:11.556 "memory_domains": [ 00:07:11.556 { 00:07:11.556 "dma_device_id": "system", 00:07:11.556 "dma_device_type": 1 00:07:11.556 }, 00:07:11.556 { 00:07:11.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.556 "dma_device_type": 2 00:07:11.556 }, 00:07:11.556 { 00:07:11.556 "dma_device_id": "system", 00:07:11.556 "dma_device_type": 1 00:07:11.556 }, 00:07:11.556 { 00:07:11.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.556 "dma_device_type": 2 00:07:11.556 } 00:07:11.556 ], 00:07:11.556 "driver_specific": { 00:07:11.556 "raid": { 00:07:11.556 "uuid": "49eaa40d-b585-4bec-8eca-00c7ca9e8961", 00:07:11.556 "strip_size_kb": 64, 00:07:11.556 "state": "online", 00:07:11.556 "raid_level": "raid0", 00:07:11.556 "superblock": true, 00:07:11.556 "num_base_bdevs": 2, 00:07:11.556 "num_base_bdevs_discovered": 2, 00:07:11.556 "num_base_bdevs_operational": 2, 00:07:11.556 "base_bdevs_list": [ 00:07:11.556 { 00:07:11.556 "name": "pt1", 00:07:11.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:11.556 "is_configured": true, 00:07:11.556 "data_offset": 2048, 00:07:11.556 "data_size": 63488 00:07:11.556 }, 00:07:11.556 { 00:07:11.556 "name": 
"pt2", 00:07:11.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:11.556 "is_configured": true, 00:07:11.556 "data_offset": 2048, 00:07:11.556 "data_size": 63488 00:07:11.556 } 00:07:11.556 ] 00:07:11.556 } 00:07:11.556 } 00:07:11.556 }' 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:11.556 pt2' 00:07:11.556 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:11.838 09:44:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.838 [2024-12-06 09:44:36.917952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 49eaa40d-b585-4bec-8eca-00c7ca9e8961 '!=' 49eaa40d-b585-4bec-8eca-00c7ca9e8961 ']' 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61181 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61181 ']' 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61181 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61181 00:07:11.838 killing process with pid 61181 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61181' 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61181 00:07:11.838 [2024-12-06 09:44:37.000132] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.838 [2024-12-06 09:44:37.000243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.838 [2024-12-06 09:44:37.000299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.838 [2024-12-06 09:44:37.000312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:11.838 09:44:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61181 00:07:12.097 [2024-12-06 09:44:37.217930] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.477 09:44:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:13.477 00:07:13.477 real 0m4.552s 00:07:13.477 user 0m6.400s 00:07:13.477 sys 0m0.758s 00:07:13.477 ************************************ 00:07:13.477 END TEST raid_superblock_test 00:07:13.477 ************************************ 00:07:13.477 09:44:38 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.477 09:44:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.477 09:44:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:13.477 09:44:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:13.477 09:44:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.477 09:44:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.477 ************************************ 00:07:13.477 START TEST raid_read_error_test 00:07:13.477 ************************************ 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RbZoUdVGyj 00:07:13.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61391 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61391 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61391 ']' 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.477 09:44:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.477 [2024-12-06 09:44:38.521548] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:13.477 [2024-12-06 09:44:38.521751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61391 ] 00:07:13.477 [2024-12-06 09:44:38.694747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.737 [2024-12-06 09:44:38.808966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.996 [2024-12-06 09:44:39.015700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.996 [2024-12-06 09:44:39.015770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.256 BaseBdev1_malloc 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.256 true 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.256 [2024-12-06 09:44:39.411885] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:14.256 [2024-12-06 09:44:39.411988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.256 [2024-12-06 09:44:39.412026] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:14.256 [2024-12-06 09:44:39.412056] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.256 [2024-12-06 09:44:39.414072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.256 [2024-12-06 09:44:39.414157] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:14.256 BaseBdev1 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.256 BaseBdev2_malloc 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.256 true 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.256 [2024-12-06 09:44:39.478524] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:14.256 [2024-12-06 09:44:39.478627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.256 [2024-12-06 09:44:39.478679] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:14.256 [2024-12-06 09:44:39.478709] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.256 [2024-12-06 09:44:39.480821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.256 [2024-12-06 09:44:39.480899] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:14.256 BaseBdev2 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.256 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.256 [2024-12-06 09:44:39.490575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:14.256 [2024-12-06 09:44:39.492632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:14.256 [2024-12-06 09:44:39.492873] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:14.256 [2024-12-06 09:44:39.492927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:14.256 [2024-12-06 09:44:39.493208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:14.256 [2024-12-06 09:44:39.493430] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:14.257 [2024-12-06 09:44:39.493477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:14.257 [2024-12-06 09:44:39.493706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.257 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.516 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.516 "name": "raid_bdev1", 00:07:14.516 "uuid": "28527599-f403-4308-8fda-cd2fcae00739", 00:07:14.516 "strip_size_kb": 64, 00:07:14.516 "state": "online", 00:07:14.516 "raid_level": "raid0", 00:07:14.516 "superblock": true, 00:07:14.516 "num_base_bdevs": 2, 00:07:14.516 "num_base_bdevs_discovered": 2, 00:07:14.516 "num_base_bdevs_operational": 2, 00:07:14.516 "base_bdevs_list": [ 00:07:14.516 { 00:07:14.516 "name": "BaseBdev1", 00:07:14.516 "uuid": "70a3cafd-6be2-57d3-9ba8-26a8dcd34c7a", 00:07:14.516 "is_configured": true, 00:07:14.516 "data_offset": 2048, 00:07:14.516 "data_size": 63488 00:07:14.516 }, 00:07:14.516 { 00:07:14.516 "name": "BaseBdev2", 00:07:14.516 "uuid": "36eeaa00-64b6-56c8-bf60-260e42825810", 00:07:14.516 "is_configured": true, 00:07:14.516 "data_offset": 2048, 00:07:14.516 "data_size": 63488 00:07:14.516 } 00:07:14.516 ] 00:07:14.516 }' 00:07:14.516 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.516 09:44:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.775 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py 
perform_tests 00:07:14.775 09:44:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:14.775 [2024-12-06 09:44:40.003056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.715 09:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.975 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.975 "name": "raid_bdev1", 00:07:15.975 "uuid": "28527599-f403-4308-8fda-cd2fcae00739", 00:07:15.975 "strip_size_kb": 64, 00:07:15.975 "state": "online", 00:07:15.975 "raid_level": "raid0", 00:07:15.975 "superblock": true, 00:07:15.975 "num_base_bdevs": 2, 00:07:15.975 "num_base_bdevs_discovered": 2, 00:07:15.975 "num_base_bdevs_operational": 2, 00:07:15.975 "base_bdevs_list": [ 00:07:15.975 { 00:07:15.975 "name": "BaseBdev1", 00:07:15.975 "uuid": "70a3cafd-6be2-57d3-9ba8-26a8dcd34c7a", 00:07:15.975 "is_configured": true, 00:07:15.975 "data_offset": 2048, 00:07:15.975 "data_size": 63488 00:07:15.975 }, 00:07:15.975 { 00:07:15.975 "name": "BaseBdev2", 00:07:15.975 "uuid": "36eeaa00-64b6-56c8-bf60-260e42825810", 00:07:15.975 "is_configured": true, 00:07:15.975 "data_offset": 2048, 00:07:15.975 "data_size": 63488 00:07:15.975 } 00:07:15.975 ] 00:07:15.975 }' 00:07:15.975 09:44:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.975 09:44:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.233 09:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:16.234 09:44:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.234 09:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.234 [2024-12-06 09:44:41.363159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:16.234 [2024-12-06 09:44:41.363259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.234 [2024-12-06 09:44:41.366088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.234 [2024-12-06 09:44:41.366204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.234 [2024-12-06 09:44:41.366261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.234 [2024-12-06 09:44:41.366308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:16.234 { 00:07:16.234 "results": [ 00:07:16.234 { 00:07:16.234 "job": "raid_bdev1", 00:07:16.234 "core_mask": "0x1", 00:07:16.234 "workload": "randrw", 00:07:16.234 "percentage": 50, 00:07:16.234 "status": "finished", 00:07:16.234 "queue_depth": 1, 00:07:16.234 "io_size": 131072, 00:07:16.234 "runtime": 1.361095, 00:07:16.234 "iops": 15650.634231997032, 00:07:16.234 "mibps": 1956.329278999629, 00:07:16.234 "io_failed": 1, 00:07:16.234 "io_timeout": 0, 00:07:16.234 "avg_latency_us": 88.6066854474645, 00:07:16.234 "min_latency_us": 27.276855895196505, 00:07:16.234 "max_latency_us": 1638.4 00:07:16.234 } 00:07:16.234 ], 00:07:16.234 "core_count": 1 00:07:16.234 } 00:07:16.234 09:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.234 09:44:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61391 00:07:16.234 09:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61391 ']' 00:07:16.234 09:44:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61391 00:07:16.234 09:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:16.234 09:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.234 09:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61391 00:07:16.234 09:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.234 killing process with pid 61391 00:07:16.234 09:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.234 09:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61391' 00:07:16.234 09:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61391 00:07:16.234 [2024-12-06 09:44:41.411357] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.234 09:44:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61391 00:07:16.493 [2024-12-06 09:44:41.544500] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.870 09:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RbZoUdVGyj 00:07:17.870 09:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:17.870 09:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:17.870 09:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:17.870 09:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:17.870 ************************************ 00:07:17.870 END TEST raid_read_error_test 00:07:17.871 ************************************ 00:07:17.871 09:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:07:17.871 09:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:17.871 09:44:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:17.871 00:07:17.871 real 0m4.305s 00:07:17.871 user 0m5.176s 00:07:17.871 sys 0m0.476s 00:07:17.871 09:44:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.871 09:44:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.871 09:44:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:17.871 09:44:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:17.871 09:44:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.871 09:44:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.871 ************************************ 00:07:17.871 START TEST raid_write_error_test 00:07:17.871 ************************************ 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.871 09:44:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.F03DGrneEr 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61532 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61532 00:07:17.871 09:44:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61532 ']' 00:07:17.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.871 09:44:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.871 [2024-12-06 09:44:42.895064] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:17.871 [2024-12-06 09:44:42.895271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61532 ] 00:07:17.871 [2024-12-06 09:44:43.081421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.130 [2024-12-06 09:44:43.198253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.130 [2024-12-06 09:44:43.397542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.130 [2024-12-06 09:44:43.397715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.707 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.707 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:18.707 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.708 BaseBdev1_malloc 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.708 true 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.708 [2024-12-06 09:44:43.781191] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:18.708 [2024-12-06 09:44:43.781299] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.708 [2024-12-06 09:44:43.781339] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:18.708 [2024-12-06 09:44:43.781371] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.708 [2024-12-06 09:44:43.783460] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.708 [2024-12-06 09:44:43.783556] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:18.708 BaseBdev1 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.708 BaseBdev2_malloc 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.708 true 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.708 [2024-12-06 09:44:43.838682] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:18.708 [2024-12-06 09:44:43.838784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.708 [2024-12-06 09:44:43.838831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:18.708 
[2024-12-06 09:44:43.838864] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.708 [2024-12-06 09:44:43.840973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.708 [2024-12-06 09:44:43.841051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:18.708 BaseBdev2 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.708 [2024-12-06 09:44:43.846725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:18.708 [2024-12-06 09:44:43.848497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:18.708 [2024-12-06 09:44:43.848680] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:18.708 [2024-12-06 09:44:43.848696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:18.708 [2024-12-06 09:44:43.848930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:18.708 [2024-12-06 09:44:43.849089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:18.708 [2024-12-06 09:44:43.849101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:18.708 [2024-12-06 09:44:43.849305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.708 
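The raid0 create step above reports `blockcnt 126976, blocklen 512` for two base bdevs built with `bdev_malloc_create 32 512` and a superblock (`-s`). That figure follows from the numbers elsewhere in this log (num_blocks 65536 per malloc bdev, data_offset 2048, data_size 63488). A minimal sketch of the arithmetic, inferred from the logged values rather than from SPDK source:

```python
# Each base bdev: bdev_malloc_create 32 512 -> 32 MiB of 512-byte blocks.
MALLOC_MIB = 32
BLOCKLEN = 512
num_blocks = MALLOC_MIB * 1024 * 1024 // BLOCKLEN   # 65536 blocks per base bdev

# With the superblock flag (-s), each base bdev reserves data_offset blocks,
# leaving data_size usable blocks (both values appear in base_bdevs_list below).
data_offset = 2048
data_size = num_blocks - data_offset                 # 63488

# raid0 stripes the data regions of both base bdevs, so total capacity is:
raid0_blockcnt = 2 * data_size
print(raid0_blockcnt)                                # 126976, matching the log
```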
09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.708 "name": "raid_bdev1", 00:07:18.708 "uuid": "d0d47de1-7a73-4075-a6a2-6a57c1cad167", 00:07:18.708 "strip_size_kb": 64, 00:07:18.708 "state": "online", 00:07:18.708 "raid_level": "raid0", 00:07:18.708 "superblock": true, 
00:07:18.708 "num_base_bdevs": 2, 00:07:18.708 "num_base_bdevs_discovered": 2, 00:07:18.708 "num_base_bdevs_operational": 2, 00:07:18.708 "base_bdevs_list": [ 00:07:18.708 { 00:07:18.708 "name": "BaseBdev1", 00:07:18.708 "uuid": "ae30a907-22e4-5e19-afc5-8d7068720f7f", 00:07:18.708 "is_configured": true, 00:07:18.708 "data_offset": 2048, 00:07:18.708 "data_size": 63488 00:07:18.708 }, 00:07:18.708 { 00:07:18.708 "name": "BaseBdev2", 00:07:18.708 "uuid": "f0c3418a-06e3-53d4-be22-ee9a77344746", 00:07:18.708 "is_configured": true, 00:07:18.708 "data_offset": 2048, 00:07:18.708 "data_size": 63488 00:07:18.708 } 00:07:18.708 ] 00:07:18.708 }' 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.708 09:44:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.278 09:44:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:19.278 09:44:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:19.278 [2024-12-06 09:44:44.399010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
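The `verify_raid_bdev_state` helper above extracts one bdev's record with `jq -r '.[] | select(.name == "raid_bdev1")'` and compares its fields against the expected state. A rough Python equivalent of that filter and of the checks it performs (the JSON below is abridged from the `bdev_raid_get_bdevs` output logged above):

```python
import json

# Abridged bdev_raid_get_bdevs output, field values copied from the log.
rpc_output = json.loads("""
[
  {
    "name": "raid_bdev1",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "raid0",
    "superblock": true,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
  }
]
""")

# Python equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in rpc_output if b["name"] == "raid_bdev1")

# The comparisons verify_raid_bdev_state makes: state, level, strip size, counts.
assert info["state"] == "online"
assert info["raid_level"] == "raid0"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 2
print("raid_bdev1 state OK")
```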
expected_num_base_bdevs=2 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.214 "name": "raid_bdev1", 00:07:20.214 "uuid": "d0d47de1-7a73-4075-a6a2-6a57c1cad167", 00:07:20.214 "strip_size_kb": 64, 00:07:20.214 "state": "online", 00:07:20.214 "raid_level": "raid0", 
00:07:20.214 "superblock": true, 00:07:20.214 "num_base_bdevs": 2, 00:07:20.214 "num_base_bdevs_discovered": 2, 00:07:20.214 "num_base_bdevs_operational": 2, 00:07:20.214 "base_bdevs_list": [ 00:07:20.214 { 00:07:20.214 "name": "BaseBdev1", 00:07:20.214 "uuid": "ae30a907-22e4-5e19-afc5-8d7068720f7f", 00:07:20.214 "is_configured": true, 00:07:20.214 "data_offset": 2048, 00:07:20.214 "data_size": 63488 00:07:20.214 }, 00:07:20.214 { 00:07:20.214 "name": "BaseBdev2", 00:07:20.214 "uuid": "f0c3418a-06e3-53d4-be22-ee9a77344746", 00:07:20.214 "is_configured": true, 00:07:20.214 "data_offset": 2048, 00:07:20.214 "data_size": 63488 00:07:20.214 } 00:07:20.214 ] 00:07:20.214 }' 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.214 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.782 [2024-12-06 09:44:45.811133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:20.782 [2024-12-06 09:44:45.811228] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.782 [2024-12-06 09:44:45.813907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.782 [2024-12-06 09:44:45.813991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.782 [2024-12-06 09:44:45.814040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.782 [2024-12-06 09:44:45.814081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:20.782 { 
00:07:20.782 "results": [ 00:07:20.782 { 00:07:20.782 "job": "raid_bdev1", 00:07:20.782 "core_mask": "0x1", 00:07:20.782 "workload": "randrw", 00:07:20.782 "percentage": 50, 00:07:20.782 "status": "finished", 00:07:20.782 "queue_depth": 1, 00:07:20.782 "io_size": 131072, 00:07:20.782 "runtime": 1.413081, 00:07:20.782 "iops": 15609.862421191709, 00:07:20.782 "mibps": 1951.2328026489636, 00:07:20.782 "io_failed": 1, 00:07:20.782 "io_timeout": 0, 00:07:20.782 "avg_latency_us": 88.91876731536365, 00:07:20.782 "min_latency_us": 26.606113537117903, 00:07:20.782 "max_latency_us": 1409.4532751091704 00:07:20.782 } 00:07:20.782 ], 00:07:20.782 "core_count": 1 00:07:20.782 } 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61532 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 61532 ']' 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61532 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61532 00:07:20.782 killing process with pid 61532 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61532' 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61532 00:07:20.782 [2024-12-06 09:44:45.856814] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.782 09:44:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61532 00:07:20.782 [2024-12-06 09:44:45.996363] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:22.164 09:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.F03DGrneEr 00:07:22.164 09:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:22.164 09:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:22.164 ************************************ 00:07:22.164 END TEST raid_write_error_test 00:07:22.164 ************************************ 00:07:22.164 09:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:07:22.164 09:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:22.164 09:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:22.164 09:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:22.164 09:44:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:07:22.164 00:07:22.164 real 0m4.404s 00:07:22.164 user 0m5.309s 00:07:22.164 sys 0m0.547s 00:07:22.164 09:44:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.164 09:44:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.164 09:44:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:22.164 09:44:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:22.164 09:44:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:22.164 09:44:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.164 09:44:47 bdev_raid -- 
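The `fail_per_s=0.71` that the `grep`/`awk` pipeline pulls from the bdevperf log follows directly from the `"results"` JSON above: one injected write failure over a ~1.41 s run. The MiB/s figure is likewise just IOPS times the 128 KiB I/O size. A quick sketch of that arithmetic, with values copied from the log:

```python
# Values from the bdevperf "results" JSON logged above.
io_size = 131072            # bytes per I/O (128 KiB)
runtime = 1.413081          # seconds
iops = 15609.862421191709
io_failed = 1

# MiB/s = IOPS * io_size / 1 MiB (131072 / 1048576 == 0.125)
mibps = iops * io_size / (1024 * 1024)

# fail_per_s = io_failed / runtime; bdev_raid.sh asserts it differs from "0.00",
# proving the injected EE_BaseBdev1_malloc write error actually surfaced.
fail_per_s = io_failed / runtime
print(f"{fail_per_s:.2f}")  # 0.71, matching the log
```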
common/autotest_common.sh@10 -- # set +x 00:07:22.164 ************************************ 00:07:22.164 START TEST raid_state_function_test 00:07:22.164 ************************************ 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:22.164 09:44:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61676 00:07:22.164 Process raid pid: 61676 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61676' 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61676 00:07:22.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61676 ']' 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.164 09:44:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.164 [2024-12-06 09:44:47.363106] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:22.164 [2024-12-06 09:44:47.363326] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.424 [2024-12-06 09:44:47.541911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.424 [2024-12-06 09:44:47.656659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.683 [2024-12-06 09:44:47.859312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.683 [2024-12-06 09:44:47.859356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.943 [2024-12-06 09:44:48.206461] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:22.943 [2024-12-06 09:44:48.206578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:22.943 [2024-12-06 09:44:48.206616] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.943 [2024-12-06 09:44:48.206644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.943 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.203 09:44:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.203 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.203 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.203 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.203 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.203 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.203 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.203 "name": "Existed_Raid", 00:07:23.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.203 "strip_size_kb": 64, 00:07:23.203 "state": "configuring", 00:07:23.203 "raid_level": "concat", 00:07:23.203 "superblock": false, 00:07:23.203 "num_base_bdevs": 2, 00:07:23.203 "num_base_bdevs_discovered": 0, 00:07:23.203 "num_base_bdevs_operational": 2, 00:07:23.203 "base_bdevs_list": [ 00:07:23.203 { 00:07:23.203 "name": "BaseBdev1", 00:07:23.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.203 "is_configured": false, 00:07:23.203 "data_offset": 0, 00:07:23.203 "data_size": 0 00:07:23.203 }, 00:07:23.203 { 00:07:23.203 "name": "BaseBdev2", 00:07:23.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.203 "is_configured": false, 00:07:23.203 "data_offset": 0, 00:07:23.203 "data_size": 0 00:07:23.203 } 00:07:23.203 ] 00:07:23.203 }' 00:07:23.203 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.203 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- 
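Here the raid was created before either base bdev exists, so `bdev_raid_get_bdevs` reports `"state": "configuring"` with `num_base_bdevs_discovered: 0`, which is what `verify_raid_bdev_state` expects at this point. A small sketch of that invariant (JSON abridged from the log; the rule that a raid stays in `configuring` until all operational base bdevs are discovered is inferred from this log's state transitions):

```python
import json

# Abridged from the bdev_raid_get_bdevs output logged above: a concat raid
# created before its base bdevs exist.
info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 2
}
""")

# Until every operational base bdev has been discovered, the raid cannot go
# online and must still report "configuring".
if info["num_base_bdevs_discovered"] < info["num_base_bdevs_operational"]:
    assert info["state"] == "configuring"
print(info["state"])
```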
common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.462 [2024-12-06 09:44:48.637663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:23.462 [2024-12-06 09:44:48.637742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.462 [2024-12-06 09:44:48.649643] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:23.462 [2024-12-06 09:44:48.649739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:23.462 [2024-12-06 09:44:48.649766] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.462 [2024-12-06 09:44:48.649790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.462 [2024-12-06 09:44:48.696022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev1 is claimed 00:07:23.462 BaseBdev1 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:23.462 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.463 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.463 [ 00:07:23.463 { 00:07:23.463 "name": "BaseBdev1", 00:07:23.463 "aliases": [ 00:07:23.463 "a3b24d26-3337-4747-8418-50c4ff5c072a" 00:07:23.463 ], 00:07:23.463 "product_name": "Malloc disk", 00:07:23.463 "block_size": 512, 00:07:23.463 "num_blocks": 65536, 00:07:23.463 "uuid": "a3b24d26-3337-4747-8418-50c4ff5c072a", 00:07:23.463 "assigned_rate_limits": { 00:07:23.463 
"rw_ios_per_sec": 0, 00:07:23.463 "rw_mbytes_per_sec": 0, 00:07:23.463 "r_mbytes_per_sec": 0, 00:07:23.463 "w_mbytes_per_sec": 0 00:07:23.463 }, 00:07:23.463 "claimed": true, 00:07:23.463 "claim_type": "exclusive_write", 00:07:23.463 "zoned": false, 00:07:23.463 "supported_io_types": { 00:07:23.463 "read": true, 00:07:23.463 "write": true, 00:07:23.463 "unmap": true, 00:07:23.463 "flush": true, 00:07:23.463 "reset": true, 00:07:23.463 "nvme_admin": false, 00:07:23.463 "nvme_io": false, 00:07:23.463 "nvme_io_md": false, 00:07:23.463 "write_zeroes": true, 00:07:23.463 "zcopy": true, 00:07:23.463 "get_zone_info": false, 00:07:23.463 "zone_management": false, 00:07:23.463 "zone_append": false, 00:07:23.463 "compare": false, 00:07:23.463 "compare_and_write": false, 00:07:23.463 "abort": true, 00:07:23.463 "seek_hole": false, 00:07:23.463 "seek_data": false, 00:07:23.463 "copy": true, 00:07:23.463 "nvme_iov_md": false 00:07:23.463 }, 00:07:23.463 "memory_domains": [ 00:07:23.463 { 00:07:23.463 "dma_device_id": "system", 00:07:23.463 "dma_device_type": 1 00:07:23.463 }, 00:07:23.463 { 00:07:23.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.463 "dma_device_type": 2 00:07:23.463 } 00:07:23.463 ], 00:07:23.463 "driver_specific": {} 00:07:23.463 } 00:07:23.463 ] 00:07:23.463 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.463 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:23.463 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:23.463 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.721 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.721 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:07:23.721 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.721 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.721 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.722 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.722 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.722 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.722 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.722 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.722 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.722 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.722 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.722 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.722 "name": "Existed_Raid", 00:07:23.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.722 "strip_size_kb": 64, 00:07:23.722 "state": "configuring", 00:07:23.722 "raid_level": "concat", 00:07:23.722 "superblock": false, 00:07:23.722 "num_base_bdevs": 2, 00:07:23.722 "num_base_bdevs_discovered": 1, 00:07:23.722 "num_base_bdevs_operational": 2, 00:07:23.722 "base_bdevs_list": [ 00:07:23.722 { 00:07:23.722 "name": "BaseBdev1", 00:07:23.722 "uuid": "a3b24d26-3337-4747-8418-50c4ff5c072a", 00:07:23.722 "is_configured": true, 00:07:23.722 "data_offset": 0, 00:07:23.722 "data_size": 65536 00:07:23.722 }, 00:07:23.722 { 00:07:23.722 "name": 
"BaseBdev2", 00:07:23.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.722 "is_configured": false, 00:07:23.722 "data_offset": 0, 00:07:23.722 "data_size": 0 00:07:23.722 } 00:07:23.722 ] 00:07:23.722 }' 00:07:23.722 09:44:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.722 09:44:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.982 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.983 [2024-12-06 09:44:49.199243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:23.983 [2024-12-06 09:44:49.199337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.983 [2024-12-06 09:44:49.207268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:23.983 [2024-12-06 09:44:49.209119] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.983 [2024-12-06 09:44:49.209219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.983 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:24.243 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.243 "name": "Existed_Raid", 00:07:24.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.243 "strip_size_kb": 64, 00:07:24.243 "state": "configuring", 00:07:24.243 "raid_level": "concat", 00:07:24.243 "superblock": false, 00:07:24.243 "num_base_bdevs": 2, 00:07:24.243 "num_base_bdevs_discovered": 1, 00:07:24.243 "num_base_bdevs_operational": 2, 00:07:24.243 "base_bdevs_list": [ 00:07:24.243 { 00:07:24.243 "name": "BaseBdev1", 00:07:24.243 "uuid": "a3b24d26-3337-4747-8418-50c4ff5c072a", 00:07:24.243 "is_configured": true, 00:07:24.243 "data_offset": 0, 00:07:24.243 "data_size": 65536 00:07:24.243 }, 00:07:24.243 { 00:07:24.243 "name": "BaseBdev2", 00:07:24.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.243 "is_configured": false, 00:07:24.243 "data_offset": 0, 00:07:24.243 "data_size": 0 00:07:24.243 } 00:07:24.243 ] 00:07:24.243 }' 00:07:24.243 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.243 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.519 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:24.519 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.519 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.519 [2024-12-06 09:44:49.691215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.519 [2024-12-06 09:44:49.691345] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:24.519 [2024-12-06 09:44:49.691371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:24.519 [2024-12-06 09:44:49.691671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:24.519 [2024-12-06 09:44:49.691891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:24.519 [2024-12-06 09:44:49.691938] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:24.519 [2024-12-06 09:44:49.692210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.519 BaseBdev2 00:07:24.519 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.519 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:24.519 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:24.519 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:24.519 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:24.519 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.520 [ 00:07:24.520 { 00:07:24.520 "name": "BaseBdev2", 00:07:24.520 "aliases": [ 00:07:24.520 "dc22f28d-828d-4505-a140-8ff3051bc44c" 00:07:24.520 ], 00:07:24.520 "product_name": "Malloc disk", 00:07:24.520 "block_size": 512, 00:07:24.520 "num_blocks": 65536, 00:07:24.520 "uuid": "dc22f28d-828d-4505-a140-8ff3051bc44c", 00:07:24.520 "assigned_rate_limits": { 00:07:24.520 "rw_ios_per_sec": 0, 00:07:24.520 "rw_mbytes_per_sec": 0, 00:07:24.520 "r_mbytes_per_sec": 0, 00:07:24.520 "w_mbytes_per_sec": 0 00:07:24.520 }, 00:07:24.520 "claimed": true, 00:07:24.520 "claim_type": "exclusive_write", 00:07:24.520 "zoned": false, 00:07:24.520 "supported_io_types": { 00:07:24.520 "read": true, 00:07:24.520 "write": true, 00:07:24.520 "unmap": true, 00:07:24.520 "flush": true, 00:07:24.520 "reset": true, 00:07:24.520 "nvme_admin": false, 00:07:24.520 "nvme_io": false, 00:07:24.520 "nvme_io_md": false, 00:07:24.520 "write_zeroes": true, 00:07:24.520 "zcopy": true, 00:07:24.520 "get_zone_info": false, 00:07:24.520 "zone_management": false, 00:07:24.520 "zone_append": false, 00:07:24.520 "compare": false, 00:07:24.520 "compare_and_write": false, 00:07:24.520 "abort": true, 00:07:24.520 "seek_hole": false, 00:07:24.520 "seek_data": false, 00:07:24.520 "copy": true, 00:07:24.520 "nvme_iov_md": false 00:07:24.520 }, 00:07:24.520 "memory_domains": [ 00:07:24.520 { 00:07:24.520 "dma_device_id": "system", 00:07:24.520 "dma_device_type": 1 00:07:24.520 }, 00:07:24.520 { 00:07:24.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.520 "dma_device_type": 2 00:07:24.520 } 00:07:24.520 ], 00:07:24.520 "driver_specific": {} 00:07:24.520 } 00:07:24.520 ] 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- 
# (( i++ )) 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.520 "name": "Existed_Raid", 00:07:24.520 
"uuid": "77d96857-4c36-43e0-8b69-fc48206b6ffb", 00:07:24.520 "strip_size_kb": 64, 00:07:24.520 "state": "online", 00:07:24.520 "raid_level": "concat", 00:07:24.520 "superblock": false, 00:07:24.520 "num_base_bdevs": 2, 00:07:24.520 "num_base_bdevs_discovered": 2, 00:07:24.520 "num_base_bdevs_operational": 2, 00:07:24.520 "base_bdevs_list": [ 00:07:24.520 { 00:07:24.520 "name": "BaseBdev1", 00:07:24.520 "uuid": "a3b24d26-3337-4747-8418-50c4ff5c072a", 00:07:24.520 "is_configured": true, 00:07:24.520 "data_offset": 0, 00:07:24.520 "data_size": 65536 00:07:24.520 }, 00:07:24.520 { 00:07:24.520 "name": "BaseBdev2", 00:07:24.520 "uuid": "dc22f28d-828d-4505-a140-8ff3051bc44c", 00:07:24.520 "is_configured": true, 00:07:24.520 "data_offset": 0, 00:07:24.520 "data_size": 65536 00:07:24.520 } 00:07:24.520 ] 00:07:24.520 }' 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.520 09:44:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.091 09:44:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:25.091 [2024-12-06 09:44:50.154703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:25.091 "name": "Existed_Raid", 00:07:25.091 "aliases": [ 00:07:25.091 "77d96857-4c36-43e0-8b69-fc48206b6ffb" 00:07:25.091 ], 00:07:25.091 "product_name": "Raid Volume", 00:07:25.091 "block_size": 512, 00:07:25.091 "num_blocks": 131072, 00:07:25.091 "uuid": "77d96857-4c36-43e0-8b69-fc48206b6ffb", 00:07:25.091 "assigned_rate_limits": { 00:07:25.091 "rw_ios_per_sec": 0, 00:07:25.091 "rw_mbytes_per_sec": 0, 00:07:25.091 "r_mbytes_per_sec": 0, 00:07:25.091 "w_mbytes_per_sec": 0 00:07:25.091 }, 00:07:25.091 "claimed": false, 00:07:25.091 "zoned": false, 00:07:25.091 "supported_io_types": { 00:07:25.091 "read": true, 00:07:25.091 "write": true, 00:07:25.091 "unmap": true, 00:07:25.091 "flush": true, 00:07:25.091 "reset": true, 00:07:25.091 "nvme_admin": false, 00:07:25.091 "nvme_io": false, 00:07:25.091 "nvme_io_md": false, 00:07:25.091 "write_zeroes": true, 00:07:25.091 "zcopy": false, 00:07:25.091 "get_zone_info": false, 00:07:25.091 "zone_management": false, 00:07:25.091 "zone_append": false, 00:07:25.091 "compare": false, 00:07:25.091 "compare_and_write": false, 00:07:25.091 "abort": false, 00:07:25.091 "seek_hole": false, 00:07:25.091 "seek_data": false, 00:07:25.091 "copy": false, 00:07:25.091 "nvme_iov_md": false 00:07:25.091 }, 00:07:25.091 "memory_domains": [ 00:07:25.091 { 00:07:25.091 "dma_device_id": "system", 00:07:25.091 "dma_device_type": 1 00:07:25.091 }, 00:07:25.091 { 00:07:25.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.091 
"dma_device_type": 2 00:07:25.091 }, 00:07:25.091 { 00:07:25.091 "dma_device_id": "system", 00:07:25.091 "dma_device_type": 1 00:07:25.091 }, 00:07:25.091 { 00:07:25.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.091 "dma_device_type": 2 00:07:25.091 } 00:07:25.091 ], 00:07:25.091 "driver_specific": { 00:07:25.091 "raid": { 00:07:25.091 "uuid": "77d96857-4c36-43e0-8b69-fc48206b6ffb", 00:07:25.091 "strip_size_kb": 64, 00:07:25.091 "state": "online", 00:07:25.091 "raid_level": "concat", 00:07:25.091 "superblock": false, 00:07:25.091 "num_base_bdevs": 2, 00:07:25.091 "num_base_bdevs_discovered": 2, 00:07:25.091 "num_base_bdevs_operational": 2, 00:07:25.091 "base_bdevs_list": [ 00:07:25.091 { 00:07:25.091 "name": "BaseBdev1", 00:07:25.091 "uuid": "a3b24d26-3337-4747-8418-50c4ff5c072a", 00:07:25.091 "is_configured": true, 00:07:25.091 "data_offset": 0, 00:07:25.091 "data_size": 65536 00:07:25.091 }, 00:07:25.091 { 00:07:25.091 "name": "BaseBdev2", 00:07:25.091 "uuid": "dc22f28d-828d-4505-a140-8ff3051bc44c", 00:07:25.091 "is_configured": true, 00:07:25.091 "data_offset": 0, 00:07:25.091 "data_size": 65536 00:07:25.091 } 00:07:25.091 ] 00:07:25.091 } 00:07:25.091 } 00:07:25.091 }' 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:25.091 BaseBdev2' 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.091 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.349 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.349 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.349 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:25.349 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.349 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:07:25.349 [2024-12-06 09:44:50.386136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:25.349 [2024-12-06 09:44:50.386243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:25.349 [2024-12-06 09:44:50.386316] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.349 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.349 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.350 "name": "Existed_Raid", 00:07:25.350 "uuid": "77d96857-4c36-43e0-8b69-fc48206b6ffb", 00:07:25.350 "strip_size_kb": 64, 00:07:25.350 "state": "offline", 00:07:25.350 "raid_level": "concat", 00:07:25.350 "superblock": false, 00:07:25.350 "num_base_bdevs": 2, 00:07:25.350 "num_base_bdevs_discovered": 1, 00:07:25.350 "num_base_bdevs_operational": 1, 00:07:25.350 "base_bdevs_list": [ 00:07:25.350 { 00:07:25.350 "name": null, 00:07:25.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.350 "is_configured": false, 00:07:25.350 "data_offset": 0, 00:07:25.350 "data_size": 65536 00:07:25.350 }, 00:07:25.350 { 00:07:25.350 "name": "BaseBdev2", 00:07:25.350 "uuid": "dc22f28d-828d-4505-a140-8ff3051bc44c", 00:07:25.350 "is_configured": true, 00:07:25.350 "data_offset": 0, 00:07:25.350 "data_size": 65536 00:07:25.350 } 00:07:25.350 ] 00:07:25.350 }' 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.350 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.914 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 
)) 00:07:25.914 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.914 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.914 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.914 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.914 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:25.914 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.914 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:25.914 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:25.914 09:44:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:25.914 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.914 09:44:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.914 [2024-12-06 09:44:50.966191] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:25.914 [2024-12-06 09:44:50.966295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61676 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61676 ']' 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 61676 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61676 00:07:25.914 killing process with pid 61676 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61676' 00:07:25.914 09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61676 00:07:25.914 [2024-12-06 09:44:51.134388] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.914 
09:44:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61676 00:07:25.914 [2024-12-06 09:44:51.152738] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:27.288 00:07:27.288 real 0m5.011s 00:07:27.288 user 0m7.230s 00:07:27.288 sys 0m0.792s 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.288 ************************************ 00:07:27.288 END TEST raid_state_function_test 00:07:27.288 ************************************ 00:07:27.288 09:44:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:27.288 09:44:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:27.288 09:44:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.288 09:44:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.288 ************************************ 00:07:27.288 START TEST raid_state_function_test_sb 00:07:27.288 ************************************ 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:27.288 09:44:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61928 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61928' 00:07:27.288 Process raid pid: 61928 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61928 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61928 ']' 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.288 09:44:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.288 [2024-12-06 09:44:52.432521] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:27.288 [2024-12-06 09:44:52.432732] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.547 [2024-12-06 09:44:52.586696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.547 [2024-12-06 09:44:52.698032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.805 [2024-12-06 09:44:52.892626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.805 [2024-12-06 09:44:52.892669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.064 [2024-12-06 09:44:53.274537] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:28.064 [2024-12-06 09:44:53.274636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.064 [2024-12-06 09:44:53.274670] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.064 [2024-12-06 09:44:53.274694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.064 "name": "Existed_Raid", 00:07:28.064 "uuid": "2256048d-63a1-4f0d-81b8-71ef9b35ab33", 00:07:28.064 
"strip_size_kb": 64, 00:07:28.064 "state": "configuring", 00:07:28.064 "raid_level": "concat", 00:07:28.064 "superblock": true, 00:07:28.064 "num_base_bdevs": 2, 00:07:28.064 "num_base_bdevs_discovered": 0, 00:07:28.064 "num_base_bdevs_operational": 2, 00:07:28.064 "base_bdevs_list": [ 00:07:28.064 { 00:07:28.064 "name": "BaseBdev1", 00:07:28.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.064 "is_configured": false, 00:07:28.064 "data_offset": 0, 00:07:28.064 "data_size": 0 00:07:28.064 }, 00:07:28.064 { 00:07:28.064 "name": "BaseBdev2", 00:07:28.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.064 "is_configured": false, 00:07:28.064 "data_offset": 0, 00:07:28.064 "data_size": 0 00:07:28.064 } 00:07:28.064 ] 00:07:28.064 }' 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.064 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.632 [2024-12-06 09:44:53.685782] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.632 [2024-12-06 09:44:53.685872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.632 [2024-12-06 09:44:53.697764] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:28.632 [2024-12-06 09:44:53.697851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.632 [2024-12-06 09:44:53.697877] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.632 [2024-12-06 09:44:53.697902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.632 [2024-12-06 09:44:53.746191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.632 BaseBdev1 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.632 [ 00:07:28.632 { 00:07:28.632 "name": "BaseBdev1", 00:07:28.632 "aliases": [ 00:07:28.632 "00a1367f-40fe-4d52-b83c-1e9423cbdb8d" 00:07:28.632 ], 00:07:28.632 "product_name": "Malloc disk", 00:07:28.632 "block_size": 512, 00:07:28.632 "num_blocks": 65536, 00:07:28.632 "uuid": "00a1367f-40fe-4d52-b83c-1e9423cbdb8d", 00:07:28.632 "assigned_rate_limits": { 00:07:28.632 "rw_ios_per_sec": 0, 00:07:28.632 "rw_mbytes_per_sec": 0, 00:07:28.632 "r_mbytes_per_sec": 0, 00:07:28.632 "w_mbytes_per_sec": 0 00:07:28.632 }, 00:07:28.632 "claimed": true, 00:07:28.632 "claim_type": "exclusive_write", 00:07:28.632 "zoned": false, 00:07:28.632 "supported_io_types": { 00:07:28.632 "read": true, 00:07:28.632 "write": true, 00:07:28.632 "unmap": true, 00:07:28.632 "flush": true, 00:07:28.632 "reset": true, 00:07:28.632 "nvme_admin": false, 00:07:28.632 "nvme_io": false, 00:07:28.632 "nvme_io_md": false, 00:07:28.632 "write_zeroes": true, 00:07:28.632 "zcopy": true, 00:07:28.632 "get_zone_info": false, 00:07:28.632 "zone_management": false, 00:07:28.632 "zone_append": false, 00:07:28.632 "compare": false, 00:07:28.632 
"compare_and_write": false, 00:07:28.632 "abort": true, 00:07:28.632 "seek_hole": false, 00:07:28.632 "seek_data": false, 00:07:28.632 "copy": true, 00:07:28.632 "nvme_iov_md": false 00:07:28.632 }, 00:07:28.632 "memory_domains": [ 00:07:28.632 { 00:07:28.632 "dma_device_id": "system", 00:07:28.632 "dma_device_type": 1 00:07:28.632 }, 00:07:28.632 { 00:07:28.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.632 "dma_device_type": 2 00:07:28.632 } 00:07:28.632 ], 00:07:28.632 "driver_specific": {} 00:07:28.632 } 00:07:28.632 ] 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.632 09:44:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.632 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.632 "name": "Existed_Raid", 00:07:28.632 "uuid": "48ed07d5-d386-44b6-bcd7-175d14d7834a", 00:07:28.632 "strip_size_kb": 64, 00:07:28.632 "state": "configuring", 00:07:28.632 "raid_level": "concat", 00:07:28.632 "superblock": true, 00:07:28.632 "num_base_bdevs": 2, 00:07:28.632 "num_base_bdevs_discovered": 1, 00:07:28.632 "num_base_bdevs_operational": 2, 00:07:28.632 "base_bdevs_list": [ 00:07:28.632 { 00:07:28.632 "name": "BaseBdev1", 00:07:28.632 "uuid": "00a1367f-40fe-4d52-b83c-1e9423cbdb8d", 00:07:28.632 "is_configured": true, 00:07:28.632 "data_offset": 2048, 00:07:28.632 "data_size": 63488 00:07:28.632 }, 00:07:28.632 { 00:07:28.633 "name": "BaseBdev2", 00:07:28.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.633 "is_configured": false, 00:07:28.633 "data_offset": 0, 00:07:28.633 "data_size": 0 00:07:28.633 } 00:07:28.633 ] 00:07:28.633 }' 00:07:28.633 09:44:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.633 09:44:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 [2024-12-06 09:44:54.185471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:29.202 [2024-12-06 09:44:54.185568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 [2024-12-06 09:44:54.197491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.202 [2024-12-06 09:44:54.199290] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.202 [2024-12-06 09:44:54.199364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.202 "name": "Existed_Raid", 00:07:29.202 "uuid": "969e1fc4-40a9-4c8f-9ddb-46c740f6874f", 00:07:29.202 "strip_size_kb": 64, 00:07:29.202 "state": "configuring", 00:07:29.202 "raid_level": "concat", 00:07:29.202 "superblock": true, 00:07:29.202 "num_base_bdevs": 2, 00:07:29.202 "num_base_bdevs_discovered": 1, 00:07:29.202 "num_base_bdevs_operational": 2, 00:07:29.202 "base_bdevs_list": [ 00:07:29.202 { 00:07:29.202 "name": "BaseBdev1", 00:07:29.202 "uuid": 
"00a1367f-40fe-4d52-b83c-1e9423cbdb8d", 00:07:29.202 "is_configured": true, 00:07:29.202 "data_offset": 2048, 00:07:29.202 "data_size": 63488 00:07:29.202 }, 00:07:29.202 { 00:07:29.202 "name": "BaseBdev2", 00:07:29.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.202 "is_configured": false, 00:07:29.202 "data_offset": 0, 00:07:29.202 "data_size": 0 00:07:29.202 } 00:07:29.202 ] 00:07:29.202 }' 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.202 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.461 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:29.461 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.461 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.461 BaseBdev2 00:07:29.461 [2024-12-06 09:44:54.695440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.461 [2024-12-06 09:44:54.695695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:29.461 [2024-12-06 09:44:54.695710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:29.461 [2024-12-06 09:44:54.695986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:29.461 [2024-12-06 09:44:54.696142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:29.461 [2024-12-06 09:44:54.696176] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:29.461 [2024-12-06 09:44:54.696332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.461 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:07:29.461 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:29.461 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:29.461 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.462 [ 00:07:29.462 { 00:07:29.462 "name": "BaseBdev2", 00:07:29.462 "aliases": [ 00:07:29.462 "af359f1b-9730-4565-a093-acf7cd049039" 00:07:29.462 ], 00:07:29.462 "product_name": "Malloc disk", 00:07:29.462 "block_size": 512, 00:07:29.462 "num_blocks": 65536, 00:07:29.462 "uuid": "af359f1b-9730-4565-a093-acf7cd049039", 00:07:29.462 "assigned_rate_limits": { 00:07:29.462 "rw_ios_per_sec": 0, 00:07:29.462 "rw_mbytes_per_sec": 0, 00:07:29.462 "r_mbytes_per_sec": 0, 
00:07:29.462 "w_mbytes_per_sec": 0 00:07:29.462 }, 00:07:29.462 "claimed": true, 00:07:29.462 "claim_type": "exclusive_write", 00:07:29.462 "zoned": false, 00:07:29.462 "supported_io_types": { 00:07:29.462 "read": true, 00:07:29.462 "write": true, 00:07:29.462 "unmap": true, 00:07:29.462 "flush": true, 00:07:29.462 "reset": true, 00:07:29.462 "nvme_admin": false, 00:07:29.462 "nvme_io": false, 00:07:29.462 "nvme_io_md": false, 00:07:29.462 "write_zeroes": true, 00:07:29.462 "zcopy": true, 00:07:29.462 "get_zone_info": false, 00:07:29.462 "zone_management": false, 00:07:29.462 "zone_append": false, 00:07:29.462 "compare": false, 00:07:29.462 "compare_and_write": false, 00:07:29.462 "abort": true, 00:07:29.462 "seek_hole": false, 00:07:29.462 "seek_data": false, 00:07:29.462 "copy": true, 00:07:29.462 "nvme_iov_md": false 00:07:29.462 }, 00:07:29.462 "memory_domains": [ 00:07:29.462 { 00:07:29.462 "dma_device_id": "system", 00:07:29.462 "dma_device_type": 1 00:07:29.462 }, 00:07:29.462 { 00:07:29.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.462 "dma_device_type": 2 00:07:29.462 } 00:07:29.462 ], 00:07:29.462 "driver_specific": {} 00:07:29.462 } 00:07:29.462 ] 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:29.462 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.720 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:29.720 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.720 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:29.720 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.720 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.720 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.720 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.720 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.721 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.721 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.721 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.721 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.721 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.721 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.721 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.721 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.721 "name": "Existed_Raid", 00:07:29.721 "uuid": "969e1fc4-40a9-4c8f-9ddb-46c740f6874f", 00:07:29.721 "strip_size_kb": 64, 00:07:29.721 "state": "online", 00:07:29.721 "raid_level": "concat", 00:07:29.721 "superblock": true, 00:07:29.721 "num_base_bdevs": 2, 00:07:29.721 "num_base_bdevs_discovered": 2, 00:07:29.721 "num_base_bdevs_operational": 2, 00:07:29.721 "base_bdevs_list": [ 00:07:29.721 { 00:07:29.721 "name": "BaseBdev1", 00:07:29.721 "uuid": 
"00a1367f-40fe-4d52-b83c-1e9423cbdb8d", 00:07:29.721 "is_configured": true, 00:07:29.721 "data_offset": 2048, 00:07:29.721 "data_size": 63488 00:07:29.721 }, 00:07:29.721 { 00:07:29.721 "name": "BaseBdev2", 00:07:29.721 "uuid": "af359f1b-9730-4565-a093-acf7cd049039", 00:07:29.721 "is_configured": true, 00:07:29.721 "data_offset": 2048, 00:07:29.721 "data_size": 63488 00:07:29.721 } 00:07:29.721 ] 00:07:29.721 }' 00:07:29.721 09:44:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.721 09:44:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.979 [2024-12-06 09:44:55.115041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:29.979 "name": "Existed_Raid", 00:07:29.979 "aliases": [ 00:07:29.979 "969e1fc4-40a9-4c8f-9ddb-46c740f6874f" 00:07:29.979 ], 00:07:29.979 "product_name": "Raid Volume", 00:07:29.979 "block_size": 512, 00:07:29.979 "num_blocks": 126976, 00:07:29.979 "uuid": "969e1fc4-40a9-4c8f-9ddb-46c740f6874f", 00:07:29.979 "assigned_rate_limits": { 00:07:29.979 "rw_ios_per_sec": 0, 00:07:29.979 "rw_mbytes_per_sec": 0, 00:07:29.979 "r_mbytes_per_sec": 0, 00:07:29.979 "w_mbytes_per_sec": 0 00:07:29.979 }, 00:07:29.979 "claimed": false, 00:07:29.979 "zoned": false, 00:07:29.979 "supported_io_types": { 00:07:29.979 "read": true, 00:07:29.979 "write": true, 00:07:29.979 "unmap": true, 00:07:29.979 "flush": true, 00:07:29.979 "reset": true, 00:07:29.979 "nvme_admin": false, 00:07:29.979 "nvme_io": false, 00:07:29.979 "nvme_io_md": false, 00:07:29.979 "write_zeroes": true, 00:07:29.979 "zcopy": false, 00:07:29.979 "get_zone_info": false, 00:07:29.979 "zone_management": false, 00:07:29.979 "zone_append": false, 00:07:29.979 "compare": false, 00:07:29.979 "compare_and_write": false, 00:07:29.979 "abort": false, 00:07:29.979 "seek_hole": false, 00:07:29.979 "seek_data": false, 00:07:29.979 "copy": false, 00:07:29.979 "nvme_iov_md": false 00:07:29.979 }, 00:07:29.979 "memory_domains": [ 00:07:29.979 { 00:07:29.979 "dma_device_id": "system", 00:07:29.979 "dma_device_type": 1 00:07:29.979 }, 00:07:29.979 { 00:07:29.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.979 "dma_device_type": 2 00:07:29.979 }, 00:07:29.979 { 00:07:29.979 "dma_device_id": "system", 00:07:29.979 "dma_device_type": 1 00:07:29.979 }, 00:07:29.979 { 00:07:29.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.979 "dma_device_type": 2 00:07:29.979 } 00:07:29.979 ], 00:07:29.979 "driver_specific": { 00:07:29.979 "raid": { 00:07:29.979 "uuid": "969e1fc4-40a9-4c8f-9ddb-46c740f6874f", 00:07:29.979 
"strip_size_kb": 64, 00:07:29.979 "state": "online", 00:07:29.979 "raid_level": "concat", 00:07:29.979 "superblock": true, 00:07:29.979 "num_base_bdevs": 2, 00:07:29.979 "num_base_bdevs_discovered": 2, 00:07:29.979 "num_base_bdevs_operational": 2, 00:07:29.979 "base_bdevs_list": [ 00:07:29.979 { 00:07:29.979 "name": "BaseBdev1", 00:07:29.979 "uuid": "00a1367f-40fe-4d52-b83c-1e9423cbdb8d", 00:07:29.979 "is_configured": true, 00:07:29.979 "data_offset": 2048, 00:07:29.979 "data_size": 63488 00:07:29.979 }, 00:07:29.979 { 00:07:29.979 "name": "BaseBdev2", 00:07:29.979 "uuid": "af359f1b-9730-4565-a093-acf7cd049039", 00:07:29.979 "is_configured": true, 00:07:29.979 "data_offset": 2048, 00:07:29.979 "data_size": 63488 00:07:29.979 } 00:07:29.979 ] 00:07:29.979 } 00:07:29.979 } 00:07:29.979 }' 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:29.979 BaseBdev2' 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.979 09:44:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:30.237 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.238 [2024-12-06 09:44:55.338415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:30.238 [2024-12-06 09:44:55.338490] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.238 [2024-12-06 09:44:55.338562] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.238 "name": "Existed_Raid", 00:07:30.238 "uuid": "969e1fc4-40a9-4c8f-9ddb-46c740f6874f", 00:07:30.238 "strip_size_kb": 64, 00:07:30.238 "state": "offline", 00:07:30.238 "raid_level": "concat", 00:07:30.238 "superblock": true, 00:07:30.238 "num_base_bdevs": 2, 00:07:30.238 "num_base_bdevs_discovered": 1, 00:07:30.238 "num_base_bdevs_operational": 1, 00:07:30.238 "base_bdevs_list": [ 00:07:30.238 { 00:07:30.238 "name": null, 00:07:30.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.238 "is_configured": false, 00:07:30.238 "data_offset": 0, 00:07:30.238 "data_size": 63488 00:07:30.238 }, 00:07:30.238 { 00:07:30.238 "name": "BaseBdev2", 00:07:30.238 "uuid": "af359f1b-9730-4565-a093-acf7cd049039", 00:07:30.238 "is_configured": true, 00:07:30.238 "data_offset": 2048, 00:07:30.238 "data_size": 63488 00:07:30.238 } 00:07:30.238 ] 00:07:30.238 }' 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.238 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.806 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:30.806 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.806 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:30.806 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:30.806 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.806 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.806 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.806 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:30.806 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:30.806 09:44:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:30.806 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.806 09:44:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.806 [2024-12-06 09:44:55.927325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:30.806 [2024-12-06 09:44:55.927421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:30.806 09:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.806 09:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:30.806 09:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.806 09:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.806 09:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.806 09:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:30.806 09:44:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.806 09:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.806 09:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:30.806 09:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:30.806 09:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:31.065 09:44:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61928 00:07:31.065 09:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61928 ']' 00:07:31.065 09:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61928 00:07:31.065 09:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:31.065 09:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.065 09:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61928 00:07:31.065 09:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.065 09:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.065 09:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61928' 00:07:31.065 killing process with pid 61928 00:07:31.065 09:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61928 00:07:31.065 [2024-12-06 09:44:56.119864] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.065 09:44:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61928 00:07:31.065 [2024-12-06 09:44:56.136711] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.044 09:44:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:32.044 00:07:32.044 real 0m4.933s 00:07:32.044 user 0m7.087s 00:07:32.044 sys 0m0.802s 00:07:32.044 09:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.044 09:44:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.044 ************************************ 00:07:32.044 END TEST raid_state_function_test_sb 00:07:32.044 ************************************ 00:07:32.314 09:44:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:32.314 09:44:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:32.314 09:44:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.314 09:44:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.314 ************************************ 00:07:32.314 START TEST raid_superblock_test 00:07:32.314 ************************************ 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:32.314 
09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62176 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62176 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62176 ']' 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.314 09:44:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.314 [2024-12-06 09:44:57.421618] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:32.314 [2024-12-06 09:44:57.421741] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62176 ] 00:07:32.572 [2024-12-06 09:44:57.600126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.573 [2024-12-06 09:44:57.713674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.831 [2024-12-06 09:44:57.916996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.831 [2024-12-06 09:44:57.917061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:33.090 09:44:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.090 malloc1 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.090 [2024-12-06 09:44:58.300878] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:33.090 [2024-12-06 09:44:58.301006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.090 [2024-12-06 09:44:58.301045] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:33.090 [2024-12-06 09:44:58.301073] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.090 [2024-12-06 09:44:58.303122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.090 [2024-12-06 09:44:58.303201] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:33.090 pt1 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:33.090 09:44:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.090 malloc2 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.090 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.090 [2024-12-06 09:44:58.356985] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:33.090 [2024-12-06 09:44:58.357081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.090 [2024-12-06 09:44:58.357124] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:33.090 
[2024-12-06 09:44:58.357166] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.090 [2024-12-06 09:44:58.359129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.090 [2024-12-06 09:44:58.359210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:33.090 pt2 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.349 [2024-12-06 09:44:58.369021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:33.349 [2024-12-06 09:44:58.370777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:33.349 [2024-12-06 09:44:58.370964] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:33.349 [2024-12-06 09:44:58.371009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.349 [2024-12-06 09:44:58.371273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:33.349 [2024-12-06 09:44:58.371456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:33.349 [2024-12-06 09:44:58.371498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:33.349 [2024-12-06 09:44:58.371702] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.349 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.349 "name": "raid_bdev1", 00:07:33.349 "uuid": 
"51107dc9-1eac-4ee8-a838-6d6784e76ad8", 00:07:33.349 "strip_size_kb": 64, 00:07:33.349 "state": "online", 00:07:33.349 "raid_level": "concat", 00:07:33.349 "superblock": true, 00:07:33.349 "num_base_bdevs": 2, 00:07:33.349 "num_base_bdevs_discovered": 2, 00:07:33.349 "num_base_bdevs_operational": 2, 00:07:33.349 "base_bdevs_list": [ 00:07:33.349 { 00:07:33.349 "name": "pt1", 00:07:33.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.350 "is_configured": true, 00:07:33.350 "data_offset": 2048, 00:07:33.350 "data_size": 63488 00:07:33.350 }, 00:07:33.350 { 00:07:33.350 "name": "pt2", 00:07:33.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.350 "is_configured": true, 00:07:33.350 "data_offset": 2048, 00:07:33.350 "data_size": 63488 00:07:33.350 } 00:07:33.350 ] 00:07:33.350 }' 00:07:33.350 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.350 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.609 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:33.609 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:33.609 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:33.609 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:33.609 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:33.609 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:33.609 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:33.609 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.609 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.609 
09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.609 [2024-12-06 09:44:58.796561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.609 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.609 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:33.609 "name": "raid_bdev1", 00:07:33.609 "aliases": [ 00:07:33.609 "51107dc9-1eac-4ee8-a838-6d6784e76ad8" 00:07:33.609 ], 00:07:33.609 "product_name": "Raid Volume", 00:07:33.609 "block_size": 512, 00:07:33.609 "num_blocks": 126976, 00:07:33.609 "uuid": "51107dc9-1eac-4ee8-a838-6d6784e76ad8", 00:07:33.609 "assigned_rate_limits": { 00:07:33.609 "rw_ios_per_sec": 0, 00:07:33.609 "rw_mbytes_per_sec": 0, 00:07:33.609 "r_mbytes_per_sec": 0, 00:07:33.609 "w_mbytes_per_sec": 0 00:07:33.609 }, 00:07:33.609 "claimed": false, 00:07:33.609 "zoned": false, 00:07:33.609 "supported_io_types": { 00:07:33.609 "read": true, 00:07:33.609 "write": true, 00:07:33.609 "unmap": true, 00:07:33.609 "flush": true, 00:07:33.609 "reset": true, 00:07:33.609 "nvme_admin": false, 00:07:33.609 "nvme_io": false, 00:07:33.609 "nvme_io_md": false, 00:07:33.609 "write_zeroes": true, 00:07:33.609 "zcopy": false, 00:07:33.609 "get_zone_info": false, 00:07:33.609 "zone_management": false, 00:07:33.609 "zone_append": false, 00:07:33.609 "compare": false, 00:07:33.609 "compare_and_write": false, 00:07:33.609 "abort": false, 00:07:33.609 "seek_hole": false, 00:07:33.609 "seek_data": false, 00:07:33.609 "copy": false, 00:07:33.609 "nvme_iov_md": false 00:07:33.609 }, 00:07:33.609 "memory_domains": [ 00:07:33.609 { 00:07:33.609 "dma_device_id": "system", 00:07:33.609 "dma_device_type": 1 00:07:33.609 }, 00:07:33.609 { 00:07:33.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.609 "dma_device_type": 2 00:07:33.609 }, 00:07:33.609 { 00:07:33.609 "dma_device_id": "system", 00:07:33.609 
"dma_device_type": 1 00:07:33.609 }, 00:07:33.609 { 00:07:33.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.609 "dma_device_type": 2 00:07:33.609 } 00:07:33.609 ], 00:07:33.609 "driver_specific": { 00:07:33.609 "raid": { 00:07:33.609 "uuid": "51107dc9-1eac-4ee8-a838-6d6784e76ad8", 00:07:33.609 "strip_size_kb": 64, 00:07:33.609 "state": "online", 00:07:33.609 "raid_level": "concat", 00:07:33.609 "superblock": true, 00:07:33.609 "num_base_bdevs": 2, 00:07:33.610 "num_base_bdevs_discovered": 2, 00:07:33.610 "num_base_bdevs_operational": 2, 00:07:33.610 "base_bdevs_list": [ 00:07:33.610 { 00:07:33.610 "name": "pt1", 00:07:33.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.610 "is_configured": true, 00:07:33.610 "data_offset": 2048, 00:07:33.610 "data_size": 63488 00:07:33.610 }, 00:07:33.610 { 00:07:33.610 "name": "pt2", 00:07:33.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.610 "is_configured": true, 00:07:33.610 "data_offset": 2048, 00:07:33.610 "data_size": 63488 00:07:33.610 } 00:07:33.610 ] 00:07:33.610 } 00:07:33.610 } 00:07:33.610 }' 00:07:33.610 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:33.610 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:33.610 pt2' 00:07:33.610 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.869 09:44:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.869 09:44:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:33.869 [2024-12-06 09:44:59.028131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=51107dc9-1eac-4ee8-a838-6d6784e76ad8 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 51107dc9-1eac-4ee8-a838-6d6784e76ad8 ']' 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.869 [2024-12-06 09:44:59.075802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.869 [2024-12-06 09:44:59.075873] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.869 [2024-12-06 09:44:59.075979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.869 [2024-12-06 09:44:59.076060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.869 [2024-12-06 09:44:59.076108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.869 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.128 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.128 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:34.128 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:34.128 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.128 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.128 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.128 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:34.128 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.128 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.129 [2024-12-06 09:44:59.219567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:34.129 [2024-12-06 09:44:59.221513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:34.129 [2024-12-06 09:44:59.221616] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:34.129 [2024-12-06 09:44:59.221716] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:34.129 [2024-12-06 09:44:59.221757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:34.129 [2024-12-06 09:44:59.221780] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:34.129 request: 00:07:34.129 { 00:07:34.129 "name": "raid_bdev1", 00:07:34.129 "raid_level": "concat", 00:07:34.129 "base_bdevs": [ 00:07:34.129 "malloc1", 00:07:34.129 "malloc2" 00:07:34.129 ], 00:07:34.129 "strip_size_kb": 64, 00:07:34.129 "superblock": false, 00:07:34.129 "method": "bdev_raid_create", 00:07:34.129 "req_id": 1 00:07:34.129 } 00:07:34.129 Got JSON-RPC error response 00:07:34.129 response: 00:07:34.129 { 00:07:34.129 "code": -17, 00:07:34.129 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:34.129 } 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.129 [2024-12-06 09:44:59.283421] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:34.129 [2024-12-06 09:44:59.283512] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.129 [2024-12-06 09:44:59.283547] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:34.129 [2024-12-06 09:44:59.283577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.129 [2024-12-06 09:44:59.285838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.129 [2024-12-06 09:44:59.285911] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:34.129 [2024-12-06 09:44:59.286011] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:34.129 [2024-12-06 09:44:59.286098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:34.129 pt1 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.129 "name": "raid_bdev1", 00:07:34.129 "uuid": "51107dc9-1eac-4ee8-a838-6d6784e76ad8", 00:07:34.129 "strip_size_kb": 64, 00:07:34.129 "state": "configuring", 00:07:34.129 "raid_level": "concat", 00:07:34.129 "superblock": true, 00:07:34.129 "num_base_bdevs": 2, 00:07:34.129 "num_base_bdevs_discovered": 1, 00:07:34.129 "num_base_bdevs_operational": 2, 00:07:34.129 "base_bdevs_list": [ 00:07:34.129 { 00:07:34.129 "name": "pt1", 00:07:34.129 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:34.129 "is_configured": true, 00:07:34.129 "data_offset": 2048, 00:07:34.129 "data_size": 63488 00:07:34.129 }, 00:07:34.129 { 00:07:34.129 "name": null, 00:07:34.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:34.129 "is_configured": false, 00:07:34.129 "data_offset": 2048, 00:07:34.129 "data_size": 63488 00:07:34.129 } 00:07:34.129 ] 00:07:34.129 }' 00:07:34.129 09:44:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.129 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.698 [2024-12-06 09:44:59.702734] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:34.698 [2024-12-06 09:44:59.702863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.698 [2024-12-06 09:44:59.702904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:34.698 [2024-12-06 09:44:59.702935] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.698 [2024-12-06 09:44:59.703445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.698 [2024-12-06 09:44:59.703511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:34.698 [2024-12-06 09:44:59.703631] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:34.698 [2024-12-06 09:44:59.703690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:34.698 [2024-12-06 09:44:59.703835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:34.698 [2024-12-06 09:44:59.703877] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:34.698 [2024-12-06 09:44:59.704136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:34.698 [2024-12-06 09:44:59.704347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:34.698 [2024-12-06 09:44:59.704386] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:34.698 [2024-12-06 09:44:59.704568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.698 pt2 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.698 "name": "raid_bdev1", 00:07:34.698 "uuid": "51107dc9-1eac-4ee8-a838-6d6784e76ad8", 00:07:34.698 "strip_size_kb": 64, 00:07:34.698 "state": "online", 00:07:34.698 "raid_level": "concat", 00:07:34.698 "superblock": true, 00:07:34.698 "num_base_bdevs": 2, 00:07:34.698 "num_base_bdevs_discovered": 2, 00:07:34.698 "num_base_bdevs_operational": 2, 00:07:34.698 "base_bdevs_list": [ 00:07:34.698 { 00:07:34.698 "name": "pt1", 00:07:34.698 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:34.698 "is_configured": true, 00:07:34.698 "data_offset": 2048, 00:07:34.698 "data_size": 63488 00:07:34.698 }, 00:07:34.698 { 00:07:34.698 "name": "pt2", 00:07:34.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:34.698 "is_configured": true, 00:07:34.698 "data_offset": 2048, 00:07:34.698 "data_size": 63488 00:07:34.698 } 00:07:34.698 ] 00:07:34.698 }' 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.698 09:44:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.957 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:34.957 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:34.957 
09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.957 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.957 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.957 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.957 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.957 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:34.957 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.957 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.957 [2024-12-06 09:45:00.166196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.957 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.957 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.957 "name": "raid_bdev1", 00:07:34.957 "aliases": [ 00:07:34.957 "51107dc9-1eac-4ee8-a838-6d6784e76ad8" 00:07:34.957 ], 00:07:34.957 "product_name": "Raid Volume", 00:07:34.957 "block_size": 512, 00:07:34.957 "num_blocks": 126976, 00:07:34.957 "uuid": "51107dc9-1eac-4ee8-a838-6d6784e76ad8", 00:07:34.957 "assigned_rate_limits": { 00:07:34.957 "rw_ios_per_sec": 0, 00:07:34.957 "rw_mbytes_per_sec": 0, 00:07:34.957 "r_mbytes_per_sec": 0, 00:07:34.957 "w_mbytes_per_sec": 0 00:07:34.957 }, 00:07:34.957 "claimed": false, 00:07:34.957 "zoned": false, 00:07:34.957 "supported_io_types": { 00:07:34.957 "read": true, 00:07:34.957 "write": true, 00:07:34.957 "unmap": true, 00:07:34.957 "flush": true, 00:07:34.957 "reset": true, 00:07:34.957 "nvme_admin": false, 00:07:34.957 "nvme_io": false, 00:07:34.957 "nvme_io_md": false, 00:07:34.957 
"write_zeroes": true, 00:07:34.957 "zcopy": false, 00:07:34.958 "get_zone_info": false, 00:07:34.958 "zone_management": false, 00:07:34.958 "zone_append": false, 00:07:34.958 "compare": false, 00:07:34.958 "compare_and_write": false, 00:07:34.958 "abort": false, 00:07:34.958 "seek_hole": false, 00:07:34.958 "seek_data": false, 00:07:34.958 "copy": false, 00:07:34.958 "nvme_iov_md": false 00:07:34.958 }, 00:07:34.958 "memory_domains": [ 00:07:34.958 { 00:07:34.958 "dma_device_id": "system", 00:07:34.958 "dma_device_type": 1 00:07:34.958 }, 00:07:34.958 { 00:07:34.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.958 "dma_device_type": 2 00:07:34.958 }, 00:07:34.958 { 00:07:34.958 "dma_device_id": "system", 00:07:34.958 "dma_device_type": 1 00:07:34.958 }, 00:07:34.958 { 00:07:34.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.958 "dma_device_type": 2 00:07:34.958 } 00:07:34.958 ], 00:07:34.958 "driver_specific": { 00:07:34.958 "raid": { 00:07:34.958 "uuid": "51107dc9-1eac-4ee8-a838-6d6784e76ad8", 00:07:34.958 "strip_size_kb": 64, 00:07:34.958 "state": "online", 00:07:34.958 "raid_level": "concat", 00:07:34.958 "superblock": true, 00:07:34.958 "num_base_bdevs": 2, 00:07:34.958 "num_base_bdevs_discovered": 2, 00:07:34.958 "num_base_bdevs_operational": 2, 00:07:34.958 "base_bdevs_list": [ 00:07:34.958 { 00:07:34.958 "name": "pt1", 00:07:34.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:34.958 "is_configured": true, 00:07:34.958 "data_offset": 2048, 00:07:34.958 "data_size": 63488 00:07:34.958 }, 00:07:34.958 { 00:07:34.958 "name": "pt2", 00:07:34.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:34.958 "is_configured": true, 00:07:34.958 "data_offset": 2048, 00:07:34.958 "data_size": 63488 00:07:34.958 } 00:07:34.958 ] 00:07:34.958 } 00:07:34.958 } 00:07:34.958 }' 00:07:34.958 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:34.958 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:34.958 pt2' 00:07:34.958 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.217 09:45:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.217 [2024-12-06 09:45:00.377803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 51107dc9-1eac-4ee8-a838-6d6784e76ad8 '!=' 51107dc9-1eac-4ee8-a838-6d6784e76ad8 ']' 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62176 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62176 ']' 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62176 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62176 00:07:35.217 killing process with pid 62176 
00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62176' 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62176 00:07:35.217 [2024-12-06 09:45:00.440975] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:35.217 [2024-12-06 09:45:00.441062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.217 [2024-12-06 09:45:00.441111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.217 [2024-12-06 09:45:00.441138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:35.217 09:45:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62176 00:07:35.476 [2024-12-06 09:45:00.651307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.852 09:45:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:36.852 00:07:36.852 real 0m4.427s 00:07:36.852 user 0m6.195s 00:07:36.852 sys 0m0.704s 00:07:36.852 09:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.852 09:45:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.852 ************************************ 00:07:36.852 END TEST raid_superblock_test 00:07:36.852 ************************************ 00:07:36.852 09:45:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:36.852 09:45:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:36.852 09:45:01 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.852 09:45:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.852 ************************************ 00:07:36.852 START TEST raid_read_error_test 00:07:36.852 ************************************ 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:36.852 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:36.853 09:45:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.e4RihEfvng 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62382 00:07:36.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62382 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62382 ']' 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:36.853 09:45:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.853 [2024-12-06 09:45:01.934172] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:36.853 [2024-12-06 09:45:01.934310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62382 ] 00:07:36.853 [2024-12-06 09:45:02.090218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.112 [2024-12-06 09:45:02.207892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.371 [2024-12-06 09:45:02.403316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.371 [2024-12-06 09:45:02.403380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.629 BaseBdev1_malloc 00:07:37.629 09:45:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.629 true 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.629 [2024-12-06 09:45:02.817443] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:37.629 [2024-12-06 09:45:02.817576] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.629 [2024-12-06 09:45:02.817621] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:37.629 [2024-12-06 09:45:02.817651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.629 [2024-12-06 09:45:02.819845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.629 [2024-12-06 09:45:02.819932] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:37.629 BaseBdev1 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.629 BaseBdev2_malloc 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.629 true 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:37.629 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.630 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.630 [2024-12-06 09:45:02.883584] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:37.630 [2024-12-06 09:45:02.883689] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.630 [2024-12-06 09:45:02.883723] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:37.630 [2024-12-06 09:45:02.883752] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.630 [2024-12-06 09:45:02.885811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.630 [2024-12-06 09:45:02.885907] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:37.630 BaseBdev2 00:07:37.630 09:45:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.630 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:37.630 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.630 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.630 [2024-12-06 09:45:02.895632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.630 [2024-12-06 09:45:02.897482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.630 [2024-12-06 09:45:02.897717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:37.630 [2024-12-06 09:45:02.897758] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:37.630 [2024-12-06 09:45:02.898040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:37.630 [2024-12-06 09:45:02.898268] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:37.630 [2024-12-06 09:45:02.898316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:37.630 [2024-12-06 09:45:02.898507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.630 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.630 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.896 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.896 "name": "raid_bdev1", 00:07:37.896 "uuid": "cbd95ff8-685d-4f70-919a-d44f7d115e76", 00:07:37.896 "strip_size_kb": 64, 00:07:37.896 "state": "online", 00:07:37.896 "raid_level": "concat", 00:07:37.896 "superblock": true, 00:07:37.896 "num_base_bdevs": 2, 00:07:37.896 "num_base_bdevs_discovered": 2, 00:07:37.896 "num_base_bdevs_operational": 2, 00:07:37.896 "base_bdevs_list": [ 00:07:37.896 { 00:07:37.896 "name": "BaseBdev1", 00:07:37.896 "uuid": "e07ed611-199a-53d9-a9fd-b5422b416259", 00:07:37.896 "is_configured": true, 00:07:37.897 "data_offset": 2048, 00:07:37.897 "data_size": 63488 00:07:37.897 }, 00:07:37.897 { 00:07:37.897 "name": "BaseBdev2", 00:07:37.897 
"uuid": "0a0cd9e5-bb59-59f6-8e8d-829cb8e58c4c", 00:07:37.897 "is_configured": true, 00:07:37.897 "data_offset": 2048, 00:07:37.897 "data_size": 63488 00:07:37.897 } 00:07:37.897 ] 00:07:37.897 }' 00:07:37.897 09:45:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.897 09:45:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.171 09:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:38.171 09:45:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:38.171 [2024-12-06 09:45:03.412114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.130 "name": "raid_bdev1", 00:07:39.130 "uuid": "cbd95ff8-685d-4f70-919a-d44f7d115e76", 00:07:39.130 "strip_size_kb": 64, 00:07:39.130 "state": "online", 00:07:39.130 "raid_level": "concat", 00:07:39.130 "superblock": true, 00:07:39.130 "num_base_bdevs": 2, 00:07:39.130 "num_base_bdevs_discovered": 2, 00:07:39.130 "num_base_bdevs_operational": 2, 00:07:39.130 "base_bdevs_list": [ 00:07:39.130 { 00:07:39.130 "name": "BaseBdev1", 00:07:39.130 "uuid": "e07ed611-199a-53d9-a9fd-b5422b416259", 00:07:39.130 "is_configured": true, 00:07:39.130 "data_offset": 2048, 00:07:39.130 "data_size": 63488 00:07:39.130 }, 00:07:39.130 { 00:07:39.130 "name": "BaseBdev2", 00:07:39.130 "uuid": 
"0a0cd9e5-bb59-59f6-8e8d-829cb8e58c4c", 00:07:39.130 "is_configured": true, 00:07:39.130 "data_offset": 2048, 00:07:39.130 "data_size": 63488 00:07:39.130 } 00:07:39.130 ] 00:07:39.130 }' 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.130 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.698 [2024-12-06 09:45:04.802376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:39.698 [2024-12-06 09:45:04.802480] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.698 [2024-12-06 09:45:04.805238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.698 [2024-12-06 09:45:04.805334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.698 [2024-12-06 09:45:04.805383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.698 [2024-12-06 09:45:04.805426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:39.698 { 00:07:39.698 "results": [ 00:07:39.698 { 00:07:39.698 "job": "raid_bdev1", 00:07:39.698 "core_mask": "0x1", 00:07:39.698 "workload": "randrw", 00:07:39.698 "percentage": 50, 00:07:39.698 "status": "finished", 00:07:39.698 "queue_depth": 1, 00:07:39.698 "io_size": 131072, 00:07:39.698 "runtime": 1.391467, 00:07:39.698 "iops": 15715.787726191134, 00:07:39.698 "mibps": 1964.4734657738918, 00:07:39.698 "io_failed": 1, 00:07:39.698 "io_timeout": 0, 00:07:39.698 "avg_latency_us": 
88.02301628933381, 00:07:39.698 "min_latency_us": 26.382532751091702, 00:07:39.698 "max_latency_us": 1373.6803493449781 00:07:39.698 } 00:07:39.698 ], 00:07:39.698 "core_count": 1 00:07:39.698 } 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62382 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62382 ']' 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62382 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62382 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.698 killing process with pid 62382 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62382' 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62382 00:07:39.698 [2024-12-06 09:45:04.840690] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.698 09:45:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62382 00:07:39.962 [2024-12-06 09:45:04.977075] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.901 09:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.e4RihEfvng 00:07:40.901 09:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:40.901 
09:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:40.901 09:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:40.901 09:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:40.901 09:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:40.901 09:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:40.901 ************************************ 00:07:40.901 END TEST raid_read_error_test 00:07:40.901 ************************************ 00:07:40.901 09:45:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:40.901 00:07:40.901 real 0m4.331s 00:07:40.901 user 0m5.193s 00:07:40.901 sys 0m0.525s 00:07:40.901 09:45:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.901 09:45:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.159 09:45:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:41.159 09:45:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:41.159 09:45:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.159 09:45:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.159 ************************************ 00:07:41.159 START TEST raid_write_error_test 00:07:41.159 ************************************ 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:41.159 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:41.160 09:45:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9UQvlwNej7 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62523 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62523 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62523 ']' 00:07:41.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.160 09:45:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.160 [2024-12-06 09:45:06.339044] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:41.160 [2024-12-06 09:45:06.339195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62523 ] 00:07:41.418 [2024-12-06 09:45:06.513439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.418 [2024-12-06 09:45:06.632477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.677 [2024-12-06 09:45:06.832677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.677 [2024-12-06 09:45:06.832742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.936 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.936 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:41.936 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:41.936 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:41.936 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.936 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.195 BaseBdev1_malloc 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.195 true 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.195 [2024-12-06 09:45:07.229667] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:42.195 [2024-12-06 09:45:07.229767] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.195 [2024-12-06 09:45:07.229803] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:42.195 [2024-12-06 09:45:07.229833] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.195 [2024-12-06 09:45:07.231853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.195 [2024-12-06 09:45:07.231928] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:42.195 BaseBdev1 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.195 BaseBdev2_malloc 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:42.195 09:45:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.195 true 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.195 [2024-12-06 09:45:07.298895] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:42.195 [2024-12-06 09:45:07.298991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.195 [2024-12-06 09:45:07.299024] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:42.195 [2024-12-06 09:45:07.299052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.195 [2024-12-06 09:45:07.301108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.195 [2024-12-06 09:45:07.301207] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:42.195 BaseBdev2 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.195 [2024-12-06 09:45:07.310934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:42.195 [2024-12-06 09:45:07.312789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.195 [2024-12-06 09:45:07.313034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:42.195 [2024-12-06 09:45:07.313053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.195 [2024-12-06 09:45:07.313307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:42.195 [2024-12-06 09:45:07.313494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:42.195 [2024-12-06 09:45:07.313506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:42.195 [2024-12-06 09:45:07.313667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.195 09:45:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.195 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.195 "name": "raid_bdev1", 00:07:42.195 "uuid": "6a53f1d2-57eb-47f8-9b03-b4a99da04dc4", 00:07:42.195 "strip_size_kb": 64, 00:07:42.195 "state": "online", 00:07:42.195 "raid_level": "concat", 00:07:42.195 "superblock": true, 00:07:42.195 "num_base_bdevs": 2, 00:07:42.195 "num_base_bdevs_discovered": 2, 00:07:42.195 "num_base_bdevs_operational": 2, 00:07:42.195 "base_bdevs_list": [ 00:07:42.195 { 00:07:42.195 "name": "BaseBdev1", 00:07:42.195 "uuid": "1ef815c7-c9fa-52f9-8896-5e12e9c20e0a", 00:07:42.195 "is_configured": true, 00:07:42.195 "data_offset": 2048, 00:07:42.195 "data_size": 63488 00:07:42.195 }, 00:07:42.195 { 00:07:42.195 "name": "BaseBdev2", 00:07:42.195 "uuid": "967062a3-131c-5551-aaae-a510dce00612", 00:07:42.196 "is_configured": true, 00:07:42.196 "data_offset": 2048, 00:07:42.196 "data_size": 63488 00:07:42.196 } 00:07:42.196 ] 00:07:42.196 }' 00:07:42.196 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.196 09:45:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.763 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:42.763 09:45:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:42.763 [2024-12-06 09:45:07.827392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:43.711 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.712 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.712 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.712 09:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.712 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:43.712 09:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.712 09:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.712 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.712 "name": "raid_bdev1", 00:07:43.712 "uuid": "6a53f1d2-57eb-47f8-9b03-b4a99da04dc4", 00:07:43.712 "strip_size_kb": 64, 00:07:43.712 "state": "online", 00:07:43.712 "raid_level": "concat", 00:07:43.712 "superblock": true, 00:07:43.712 "num_base_bdevs": 2, 00:07:43.712 "num_base_bdevs_discovered": 2, 00:07:43.712 "num_base_bdevs_operational": 2, 00:07:43.712 "base_bdevs_list": [ 00:07:43.712 { 00:07:43.712 "name": "BaseBdev1", 00:07:43.712 "uuid": "1ef815c7-c9fa-52f9-8896-5e12e9c20e0a", 00:07:43.712 "is_configured": true, 00:07:43.712 "data_offset": 2048, 00:07:43.712 "data_size": 63488 00:07:43.712 }, 00:07:43.712 { 00:07:43.712 "name": "BaseBdev2", 00:07:43.712 "uuid": "967062a3-131c-5551-aaae-a510dce00612", 00:07:43.712 "is_configured": true, 00:07:43.712 "data_offset": 2048, 00:07:43.712 "data_size": 63488 00:07:43.712 } 00:07:43.712 ] 00:07:43.712 }' 00:07:43.712 09:45:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.712 09:45:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.969 [2024-12-06 09:45:09.163331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:43.969 [2024-12-06 09:45:09.163440] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.969 [2024-12-06 09:45:09.166473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.969 [2024-12-06 09:45:09.166565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.969 [2024-12-06 09:45:09.166621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.969 [2024-12-06 09:45:09.166670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:43.969 { 00:07:43.969 "results": [ 00:07:43.969 { 00:07:43.969 "job": "raid_bdev1", 00:07:43.969 "core_mask": "0x1", 00:07:43.969 "workload": "randrw", 00:07:43.969 "percentage": 50, 00:07:43.969 "status": "finished", 00:07:43.969 "queue_depth": 1, 00:07:43.969 "io_size": 131072, 00:07:43.969 "runtime": 1.336953, 00:07:43.969 "iops": 15224.91815344294, 00:07:43.969 "mibps": 1903.1147691803676, 00:07:43.969 "io_failed": 1, 00:07:43.969 "io_timeout": 0, 00:07:43.969 "avg_latency_us": 90.96824334702556, 00:07:43.969 "min_latency_us": 27.165065502183406, 00:07:43.969 "max_latency_us": 1452.380786026201 00:07:43.969 } 00:07:43.969 ], 00:07:43.969 "core_count": 1 00:07:43.969 } 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62523 00:07:43.969 09:45:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62523 ']' 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62523 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62523 00:07:43.969 killing process with pid 62523 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62523' 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62523 00:07:43.969 [2024-12-06 09:45:09.197941] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.969 09:45:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62523 00:07:44.227 [2024-12-06 09:45:09.338562] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.597 09:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9UQvlwNej7 00:07:45.597 09:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:45.597 09:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:45.597 09:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:45.597 09:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:45.597 09:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:45.597 09:45:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:45.597 09:45:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:45.597 ************************************ 00:07:45.597 END TEST raid_write_error_test 00:07:45.597 ************************************ 00:07:45.597 00:07:45.597 real 0m4.332s 00:07:45.597 user 0m5.147s 00:07:45.597 sys 0m0.509s 00:07:45.597 09:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.597 09:45:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.597 09:45:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:45.597 09:45:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:45.597 09:45:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:45.597 09:45:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.597 09:45:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.598 ************************************ 00:07:45.598 START TEST raid_state_function_test 00:07:45.598 ************************************ 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:45.598 Process raid pid: 62665 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62665 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 62665' 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62665 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62665 ']' 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.598 09:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.598 [2024-12-06 09:45:10.721503] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:45.598 [2024-12-06 09:45:10.721623] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.868 [2024-12-06 09:45:10.894676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.868 [2024-12-06 09:45:11.012713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.126 [2024-12-06 09:45:11.224820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.126 [2024-12-06 09:45:11.224878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.383 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.383 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:46.383 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.383 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.383 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.383 [2024-12-06 09:45:11.575303] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.383 [2024-12-06 09:45:11.575420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.383 [2024-12-06 09:45:11.575448] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.383 [2024-12-06 09:45:11.575471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.383 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.383 09:45:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:46.383 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.384 "name": "Existed_Raid", 00:07:46.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.384 "strip_size_kb": 0, 00:07:46.384 "state": "configuring", 00:07:46.384 
"raid_level": "raid1", 00:07:46.384 "superblock": false, 00:07:46.384 "num_base_bdevs": 2, 00:07:46.384 "num_base_bdevs_discovered": 0, 00:07:46.384 "num_base_bdevs_operational": 2, 00:07:46.384 "base_bdevs_list": [ 00:07:46.384 { 00:07:46.384 "name": "BaseBdev1", 00:07:46.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.384 "is_configured": false, 00:07:46.384 "data_offset": 0, 00:07:46.384 "data_size": 0 00:07:46.384 }, 00:07:46.384 { 00:07:46.384 "name": "BaseBdev2", 00:07:46.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.384 "is_configured": false, 00:07:46.384 "data_offset": 0, 00:07:46.384 "data_size": 0 00:07:46.384 } 00:07:46.384 ] 00:07:46.384 }' 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.384 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.950 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.950 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.950 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.950 [2024-12-06 09:45:11.990560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.950 [2024-12-06 09:45:11.990694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:46.950 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.950 09:45:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.950 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.950 09:45:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:46.950 [2024-12-06 09:45:12.002517] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:46.950 [2024-12-06 09:45:12.002624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.950 [2024-12-06 09:45:12.002652] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.950 [2024-12-06 09:45:12.002678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.950 [2024-12-06 09:45:12.050762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.950 BaseBdev1 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.950 [ 00:07:46.950 { 00:07:46.950 "name": "BaseBdev1", 00:07:46.950 "aliases": [ 00:07:46.950 "fc7b84fe-e3cf-4db2-a606-8b28908e4b49" 00:07:46.950 ], 00:07:46.950 "product_name": "Malloc disk", 00:07:46.950 "block_size": 512, 00:07:46.950 "num_blocks": 65536, 00:07:46.950 "uuid": "fc7b84fe-e3cf-4db2-a606-8b28908e4b49", 00:07:46.950 "assigned_rate_limits": { 00:07:46.950 "rw_ios_per_sec": 0, 00:07:46.950 "rw_mbytes_per_sec": 0, 00:07:46.950 "r_mbytes_per_sec": 0, 00:07:46.950 "w_mbytes_per_sec": 0 00:07:46.950 }, 00:07:46.950 "claimed": true, 00:07:46.950 "claim_type": "exclusive_write", 00:07:46.950 "zoned": false, 00:07:46.950 "supported_io_types": { 00:07:46.950 "read": true, 00:07:46.950 "write": true, 00:07:46.950 "unmap": true, 00:07:46.950 "flush": true, 00:07:46.950 "reset": true, 00:07:46.950 "nvme_admin": false, 00:07:46.950 "nvme_io": false, 00:07:46.950 "nvme_io_md": false, 00:07:46.950 "write_zeroes": true, 00:07:46.950 "zcopy": true, 00:07:46.950 "get_zone_info": false, 00:07:46.950 "zone_management": false, 00:07:46.950 "zone_append": false, 00:07:46.950 "compare": false, 00:07:46.950 "compare_and_write": false, 00:07:46.950 "abort": true, 00:07:46.950 "seek_hole": false, 00:07:46.950 "seek_data": false, 00:07:46.950 "copy": true, 00:07:46.950 "nvme_iov_md": 
false 00:07:46.950 }, 00:07:46.950 "memory_domains": [ 00:07:46.950 { 00:07:46.950 "dma_device_id": "system", 00:07:46.950 "dma_device_type": 1 00:07:46.950 }, 00:07:46.950 { 00:07:46.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.950 "dma_device_type": 2 00:07:46.950 } 00:07:46.950 ], 00:07:46.950 "driver_specific": {} 00:07:46.950 } 00:07:46.950 ] 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.950 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.950 
09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.951 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.951 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.951 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.951 "name": "Existed_Raid", 00:07:46.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.951 "strip_size_kb": 0, 00:07:46.951 "state": "configuring", 00:07:46.951 "raid_level": "raid1", 00:07:46.951 "superblock": false, 00:07:46.951 "num_base_bdevs": 2, 00:07:46.951 "num_base_bdevs_discovered": 1, 00:07:46.951 "num_base_bdevs_operational": 2, 00:07:46.951 "base_bdevs_list": [ 00:07:46.951 { 00:07:46.951 "name": "BaseBdev1", 00:07:46.951 "uuid": "fc7b84fe-e3cf-4db2-a606-8b28908e4b49", 00:07:46.951 "is_configured": true, 00:07:46.951 "data_offset": 0, 00:07:46.951 "data_size": 65536 00:07:46.951 }, 00:07:46.951 { 00:07:46.951 "name": "BaseBdev2", 00:07:46.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.951 "is_configured": false, 00:07:46.951 "data_offset": 0, 00:07:46.951 "data_size": 0 00:07:46.951 } 00:07:46.951 ] 00:07:46.951 }' 00:07:46.951 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.951 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.519 [2024-12-06 09:45:12.494070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.519 [2024-12-06 09:45:12.494213] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.519 [2024-12-06 09:45:12.506094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.519 [2024-12-06 09:45:12.507987] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.519 [2024-12-06 09:45:12.508072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.519 "name": "Existed_Raid", 00:07:47.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.519 "strip_size_kb": 0, 00:07:47.519 "state": "configuring", 00:07:47.519 "raid_level": "raid1", 00:07:47.519 "superblock": false, 00:07:47.519 "num_base_bdevs": 2, 00:07:47.519 "num_base_bdevs_discovered": 1, 00:07:47.519 "num_base_bdevs_operational": 2, 00:07:47.519 "base_bdevs_list": [ 00:07:47.519 { 00:07:47.519 "name": "BaseBdev1", 00:07:47.519 "uuid": "fc7b84fe-e3cf-4db2-a606-8b28908e4b49", 00:07:47.519 "is_configured": true, 00:07:47.519 "data_offset": 0, 00:07:47.519 "data_size": 65536 00:07:47.519 }, 00:07:47.519 { 00:07:47.519 "name": "BaseBdev2", 00:07:47.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.519 "is_configured": false, 00:07:47.519 "data_offset": 0, 00:07:47.519 "data_size": 0 00:07:47.519 } 00:07:47.519 ] 
00:07:47.519 }' 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.519 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.784 [2024-12-06 09:45:12.925474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.784 [2024-12-06 09:45:12.925621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:47.784 [2024-12-06 09:45:12.925646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:47.784 [2024-12-06 09:45:12.925920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:47.784 [2024-12-06 09:45:12.926133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:47.784 [2024-12-06 09:45:12.926192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:47.784 [2024-12-06 09:45:12.926507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.784 BaseBdev2 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.784 [ 00:07:47.784 { 00:07:47.784 "name": "BaseBdev2", 00:07:47.784 "aliases": [ 00:07:47.784 "931bb1ce-11e3-4de4-82cc-44666da21c72" 00:07:47.784 ], 00:07:47.784 "product_name": "Malloc disk", 00:07:47.784 "block_size": 512, 00:07:47.784 "num_blocks": 65536, 00:07:47.784 "uuid": "931bb1ce-11e3-4de4-82cc-44666da21c72", 00:07:47.784 "assigned_rate_limits": { 00:07:47.784 "rw_ios_per_sec": 0, 00:07:47.784 "rw_mbytes_per_sec": 0, 00:07:47.784 "r_mbytes_per_sec": 0, 00:07:47.784 "w_mbytes_per_sec": 0 00:07:47.784 }, 00:07:47.784 "claimed": true, 00:07:47.784 "claim_type": "exclusive_write", 00:07:47.784 "zoned": false, 00:07:47.784 "supported_io_types": { 00:07:47.784 "read": true, 00:07:47.784 "write": true, 00:07:47.784 "unmap": true, 00:07:47.784 "flush": true, 00:07:47.784 "reset": true, 00:07:47.784 "nvme_admin": false, 00:07:47.784 "nvme_io": false, 00:07:47.784 "nvme_io_md": false, 00:07:47.784 "write_zeroes": 
true, 00:07:47.784 "zcopy": true, 00:07:47.784 "get_zone_info": false, 00:07:47.784 "zone_management": false, 00:07:47.784 "zone_append": false, 00:07:47.784 "compare": false, 00:07:47.784 "compare_and_write": false, 00:07:47.784 "abort": true, 00:07:47.784 "seek_hole": false, 00:07:47.784 "seek_data": false, 00:07:47.784 "copy": true, 00:07:47.784 "nvme_iov_md": false 00:07:47.784 }, 00:07:47.784 "memory_domains": [ 00:07:47.784 { 00:07:47.784 "dma_device_id": "system", 00:07:47.784 "dma_device_type": 1 00:07:47.784 }, 00:07:47.784 { 00:07:47.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.784 "dma_device_type": 2 00:07:47.784 } 00:07:47.784 ], 00:07:47.784 "driver_specific": {} 00:07:47.784 } 00:07:47.784 ] 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.784 09:45:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.784 09:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.784 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.784 "name": "Existed_Raid", 00:07:47.784 "uuid": "0fb1dbf7-79b3-4c1f-a07c-bf8435073a03", 00:07:47.784 "strip_size_kb": 0, 00:07:47.784 "state": "online", 00:07:47.784 "raid_level": "raid1", 00:07:47.784 "superblock": false, 00:07:47.784 "num_base_bdevs": 2, 00:07:47.784 "num_base_bdevs_discovered": 2, 00:07:47.784 "num_base_bdevs_operational": 2, 00:07:47.784 "base_bdevs_list": [ 00:07:47.784 { 00:07:47.784 "name": "BaseBdev1", 00:07:47.784 "uuid": "fc7b84fe-e3cf-4db2-a606-8b28908e4b49", 00:07:47.784 "is_configured": true, 00:07:47.784 "data_offset": 0, 00:07:47.784 "data_size": 65536 00:07:47.784 }, 00:07:47.784 { 00:07:47.784 "name": "BaseBdev2", 00:07:47.784 "uuid": "931bb1ce-11e3-4de4-82cc-44666da21c72", 00:07:47.784 "is_configured": true, 00:07:47.784 "data_offset": 0, 00:07:47.784 "data_size": 65536 00:07:47.784 } 00:07:47.784 ] 00:07:47.784 }' 00:07:47.784 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.784 09:45:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.378 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:48.378 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:48.378 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:48.378 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:48.378 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:48.378 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:48.378 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.378 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:48.378 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.378 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.378 [2024-12-06 09:45:13.345099] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.378 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.378 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.378 "name": "Existed_Raid", 00:07:48.378 "aliases": [ 00:07:48.378 "0fb1dbf7-79b3-4c1f-a07c-bf8435073a03" 00:07:48.378 ], 00:07:48.378 "product_name": "Raid Volume", 00:07:48.378 "block_size": 512, 00:07:48.378 "num_blocks": 65536, 00:07:48.378 "uuid": "0fb1dbf7-79b3-4c1f-a07c-bf8435073a03", 00:07:48.378 "assigned_rate_limits": { 00:07:48.378 "rw_ios_per_sec": 0, 00:07:48.378 "rw_mbytes_per_sec": 0, 00:07:48.379 "r_mbytes_per_sec": 0, 00:07:48.379 
"w_mbytes_per_sec": 0 00:07:48.379 }, 00:07:48.379 "claimed": false, 00:07:48.379 "zoned": false, 00:07:48.379 "supported_io_types": { 00:07:48.379 "read": true, 00:07:48.379 "write": true, 00:07:48.379 "unmap": false, 00:07:48.379 "flush": false, 00:07:48.379 "reset": true, 00:07:48.379 "nvme_admin": false, 00:07:48.379 "nvme_io": false, 00:07:48.379 "nvme_io_md": false, 00:07:48.379 "write_zeroes": true, 00:07:48.379 "zcopy": false, 00:07:48.379 "get_zone_info": false, 00:07:48.379 "zone_management": false, 00:07:48.379 "zone_append": false, 00:07:48.379 "compare": false, 00:07:48.379 "compare_and_write": false, 00:07:48.379 "abort": false, 00:07:48.379 "seek_hole": false, 00:07:48.379 "seek_data": false, 00:07:48.379 "copy": false, 00:07:48.379 "nvme_iov_md": false 00:07:48.379 }, 00:07:48.379 "memory_domains": [ 00:07:48.379 { 00:07:48.379 "dma_device_id": "system", 00:07:48.379 "dma_device_type": 1 00:07:48.379 }, 00:07:48.379 { 00:07:48.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.379 "dma_device_type": 2 00:07:48.379 }, 00:07:48.379 { 00:07:48.379 "dma_device_id": "system", 00:07:48.379 "dma_device_type": 1 00:07:48.379 }, 00:07:48.379 { 00:07:48.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.379 "dma_device_type": 2 00:07:48.379 } 00:07:48.379 ], 00:07:48.379 "driver_specific": { 00:07:48.379 "raid": { 00:07:48.379 "uuid": "0fb1dbf7-79b3-4c1f-a07c-bf8435073a03", 00:07:48.379 "strip_size_kb": 0, 00:07:48.379 "state": "online", 00:07:48.379 "raid_level": "raid1", 00:07:48.379 "superblock": false, 00:07:48.379 "num_base_bdevs": 2, 00:07:48.379 "num_base_bdevs_discovered": 2, 00:07:48.379 "num_base_bdevs_operational": 2, 00:07:48.379 "base_bdevs_list": [ 00:07:48.379 { 00:07:48.379 "name": "BaseBdev1", 00:07:48.379 "uuid": "fc7b84fe-e3cf-4db2-a606-8b28908e4b49", 00:07:48.379 "is_configured": true, 00:07:48.379 "data_offset": 0, 00:07:48.379 "data_size": 65536 00:07:48.379 }, 00:07:48.379 { 00:07:48.379 "name": "BaseBdev2", 00:07:48.379 "uuid": 
"931bb1ce-11e3-4de4-82cc-44666da21c72", 00:07:48.379 "is_configured": true, 00:07:48.379 "data_offset": 0, 00:07:48.379 "data_size": 65536 00:07:48.379 } 00:07:48.379 ] 00:07:48.379 } 00:07:48.379 } 00:07:48.379 }' 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:48.379 BaseBdev2' 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:48.379 09:45:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.379 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.379 [2024-12-06 09:45:13.564512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.639 "name": "Existed_Raid", 00:07:48.639 "uuid": "0fb1dbf7-79b3-4c1f-a07c-bf8435073a03", 00:07:48.639 "strip_size_kb": 0, 00:07:48.639 "state": "online", 00:07:48.639 "raid_level": "raid1", 00:07:48.639 "superblock": false, 00:07:48.639 "num_base_bdevs": 2, 00:07:48.639 "num_base_bdevs_discovered": 1, 00:07:48.639 "num_base_bdevs_operational": 1, 00:07:48.639 "base_bdevs_list": [ 00:07:48.639 { 
00:07:48.639 "name": null, 00:07:48.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.639 "is_configured": false, 00:07:48.639 "data_offset": 0, 00:07:48.639 "data_size": 65536 00:07:48.639 }, 00:07:48.639 { 00:07:48.639 "name": "BaseBdev2", 00:07:48.639 "uuid": "931bb1ce-11e3-4de4-82cc-44666da21c72", 00:07:48.639 "is_configured": true, 00:07:48.639 "data_offset": 0, 00:07:48.639 "data_size": 65536 00:07:48.639 } 00:07:48.639 ] 00:07:48.639 }' 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.639 09:45:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.900 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:48.900 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.900 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.900 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:48.900 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.900 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.900 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.900 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:48.900 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.900 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:48.900 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.900 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:48.900 [2024-12-06 09:45:14.114170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.900 [2024-12-06 09:45:14.114313] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:49.159 [2024-12-06 09:45:14.210172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.159 [2024-12-06 09:45:14.210332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.159 [2024-12-06 09:45:14.210376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62665 00:07:49.159 09:45:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62665 ']' 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62665 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62665 00:07:49.159 killing process with pid 62665 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62665' 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62665 00:07:49.159 [2024-12-06 09:45:14.301468] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.159 09:45:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62665 00:07:49.159 [2024-12-06 09:45:14.318286] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:50.536 00:07:50.536 real 0m4.838s 00:07:50.536 user 0m6.879s 00:07:50.536 sys 0m0.770s 00:07:50.536 ************************************ 00:07:50.536 END TEST raid_state_function_test 00:07:50.536 ************************************ 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.536 09:45:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:50.536 09:45:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:50.536 09:45:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.536 09:45:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.536 ************************************ 00:07:50.536 START TEST raid_state_function_test_sb 00:07:50.536 ************************************ 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:50.536 Process raid pid: 62913 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62913 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62913' 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62913 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62913 ']' 00:07:50.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.536 09:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.537 09:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.537 09:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.537 09:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:50.537 09:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.537 [2024-12-06 09:45:15.620675] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:07:50.537 [2024-12-06 09:45:15.620878] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.537 [2024-12-06 09:45:15.792853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.795 [2024-12-06 09:45:15.913774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.053 [2024-12-06 09:45:16.122396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.053 [2024-12-06 09:45:16.122518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.312 [2024-12-06 09:45:16.446872] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.312 [2024-12-06 09:45:16.446971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.312 [2024-12-06 09:45:16.447001] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.312 [2024-12-06 09:45:16.447025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.312 "name": "Existed_Raid", 00:07:51.312 "uuid": "fae1d84a-4391-4dd1-bc75-c26880702115", 00:07:51.312 "strip_size_kb": 0, 00:07:51.312 "state": "configuring", 00:07:51.312 "raid_level": "raid1", 00:07:51.312 "superblock": true, 00:07:51.312 "num_base_bdevs": 2, 00:07:51.312 "num_base_bdevs_discovered": 0, 00:07:51.312 "num_base_bdevs_operational": 2, 00:07:51.312 "base_bdevs_list": [ 00:07:51.312 { 00:07:51.312 "name": "BaseBdev1", 00:07:51.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.312 "is_configured": false, 00:07:51.312 "data_offset": 0, 00:07:51.312 "data_size": 0 00:07:51.312 }, 00:07:51.312 { 00:07:51.312 "name": "BaseBdev2", 00:07:51.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.312 "is_configured": false, 00:07:51.312 "data_offset": 0, 00:07:51.312 "data_size": 0 00:07:51.312 } 00:07:51.312 ] 00:07:51.312 }' 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.312 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.880 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:07:51.880 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.880 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.880 [2024-12-06 09:45:16.914040] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.880 [2024-12-06 09:45:16.914121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:51.880 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.880 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.880 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.880 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.881 [2024-12-06 09:45:16.926008] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.881 [2024-12-06 09:45:16.926089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.881 [2024-12-06 09:45:16.926116] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.881 [2024-12-06 09:45:16.926150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.881 [2024-12-06 09:45:16.974369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.881 BaseBdev1 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.881 09:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.881 [ 00:07:51.881 { 00:07:51.881 "name": "BaseBdev1", 00:07:51.881 "aliases": [ 00:07:51.881 "1806382f-06b2-4f92-b68c-0ec42862ad8f" 00:07:51.881 ], 00:07:51.881 "product_name": "Malloc 
disk", 00:07:51.881 "block_size": 512, 00:07:51.881 "num_blocks": 65536, 00:07:51.881 "uuid": "1806382f-06b2-4f92-b68c-0ec42862ad8f", 00:07:51.881 "assigned_rate_limits": { 00:07:51.881 "rw_ios_per_sec": 0, 00:07:51.881 "rw_mbytes_per_sec": 0, 00:07:51.881 "r_mbytes_per_sec": 0, 00:07:51.881 "w_mbytes_per_sec": 0 00:07:51.881 }, 00:07:51.881 "claimed": true, 00:07:51.881 "claim_type": "exclusive_write", 00:07:51.881 "zoned": false, 00:07:51.881 "supported_io_types": { 00:07:51.881 "read": true, 00:07:51.881 "write": true, 00:07:51.881 "unmap": true, 00:07:51.881 "flush": true, 00:07:51.881 "reset": true, 00:07:51.881 "nvme_admin": false, 00:07:51.881 "nvme_io": false, 00:07:51.881 "nvme_io_md": false, 00:07:51.881 "write_zeroes": true, 00:07:51.881 "zcopy": true, 00:07:51.881 "get_zone_info": false, 00:07:51.881 "zone_management": false, 00:07:51.881 "zone_append": false, 00:07:51.881 "compare": false, 00:07:51.881 "compare_and_write": false, 00:07:51.881 "abort": true, 00:07:51.881 "seek_hole": false, 00:07:51.881 "seek_data": false, 00:07:51.881 "copy": true, 00:07:51.881 "nvme_iov_md": false 00:07:51.881 }, 00:07:51.881 "memory_domains": [ 00:07:51.881 { 00:07:51.881 "dma_device_id": "system", 00:07:51.881 "dma_device_type": 1 00:07:51.881 }, 00:07:51.881 { 00:07:51.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.881 "dma_device_type": 2 00:07:51.881 } 00:07:51.881 ], 00:07:51.881 "driver_specific": {} 00:07:51.881 } 00:07:51.881 ] 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.881 09:45:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.881 "name": "Existed_Raid", 00:07:51.881 "uuid": "6d998698-fda9-41e9-9b4d-975602100602", 00:07:51.881 "strip_size_kb": 0, 00:07:51.881 "state": "configuring", 00:07:51.881 "raid_level": "raid1", 00:07:51.881 "superblock": true, 00:07:51.881 "num_base_bdevs": 2, 00:07:51.881 "num_base_bdevs_discovered": 1, 00:07:51.881 "num_base_bdevs_operational": 2, 00:07:51.881 "base_bdevs_list": [ 00:07:51.881 { 
00:07:51.881 "name": "BaseBdev1", 00:07:51.881 "uuid": "1806382f-06b2-4f92-b68c-0ec42862ad8f", 00:07:51.881 "is_configured": true, 00:07:51.881 "data_offset": 2048, 00:07:51.881 "data_size": 63488 00:07:51.881 }, 00:07:51.881 { 00:07:51.881 "name": "BaseBdev2", 00:07:51.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.881 "is_configured": false, 00:07:51.881 "data_offset": 0, 00:07:51.881 "data_size": 0 00:07:51.881 } 00:07:51.881 ] 00:07:51.881 }' 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.881 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.449 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:52.449 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.449 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.449 [2024-12-06 09:45:17.441608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.449 [2024-12-06 09:45:17.441671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:52.449 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.449 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:52.449 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.449 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.449 [2024-12-06 09:45:17.453618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.449 [2024-12-06 09:45:17.455360] bdev.c:8674:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.449 [2024-12-06 09:45:17.455399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.449 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.449 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:52.449 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.450 09:45:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.450 "name": "Existed_Raid", 00:07:52.450 "uuid": "dff3d245-3740-42af-b593-6bc68bf73c03", 00:07:52.450 "strip_size_kb": 0, 00:07:52.450 "state": "configuring", 00:07:52.450 "raid_level": "raid1", 00:07:52.450 "superblock": true, 00:07:52.450 "num_base_bdevs": 2, 00:07:52.450 "num_base_bdevs_discovered": 1, 00:07:52.450 "num_base_bdevs_operational": 2, 00:07:52.450 "base_bdevs_list": [ 00:07:52.450 { 00:07:52.450 "name": "BaseBdev1", 00:07:52.450 "uuid": "1806382f-06b2-4f92-b68c-0ec42862ad8f", 00:07:52.450 "is_configured": true, 00:07:52.450 "data_offset": 2048, 00:07:52.450 "data_size": 63488 00:07:52.450 }, 00:07:52.450 { 00:07:52.450 "name": "BaseBdev2", 00:07:52.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.450 "is_configured": false, 00:07:52.450 "data_offset": 0, 00:07:52.450 "data_size": 0 00:07:52.450 } 00:07:52.450 ] 00:07:52.450 }' 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.450 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.711 [2024-12-06 09:45:17.950357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:07:52.711 [2024-12-06 09:45:17.950613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:52.711 [2024-12-06 09:45:17.950629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:52.711 [2024-12-06 09:45:17.950870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:52.711 [2024-12-06 09:45:17.951037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:52.711 [2024-12-06 09:45:17.951051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:52.711 [2024-12-06 09:45:17.951210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.711 BaseBdev2 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.711 09:45:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.711 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.711 [ 00:07:52.977 { 00:07:52.977 "name": "BaseBdev2", 00:07:52.977 "aliases": [ 00:07:52.977 "0b99dc56-4520-4071-acad-e019644124ae" 00:07:52.977 ], 00:07:52.977 "product_name": "Malloc disk", 00:07:52.977 "block_size": 512, 00:07:52.977 "num_blocks": 65536, 00:07:52.977 "uuid": "0b99dc56-4520-4071-acad-e019644124ae", 00:07:52.977 "assigned_rate_limits": { 00:07:52.977 "rw_ios_per_sec": 0, 00:07:52.977 "rw_mbytes_per_sec": 0, 00:07:52.977 "r_mbytes_per_sec": 0, 00:07:52.977 "w_mbytes_per_sec": 0 00:07:52.977 }, 00:07:52.977 "claimed": true, 00:07:52.977 "claim_type": "exclusive_write", 00:07:52.978 "zoned": false, 00:07:52.978 "supported_io_types": { 00:07:52.978 "read": true, 00:07:52.978 "write": true, 00:07:52.978 "unmap": true, 00:07:52.978 "flush": true, 00:07:52.978 "reset": true, 00:07:52.978 "nvme_admin": false, 00:07:52.978 "nvme_io": false, 00:07:52.978 "nvme_io_md": false, 00:07:52.978 "write_zeroes": true, 00:07:52.978 "zcopy": true, 00:07:52.978 "get_zone_info": false, 00:07:52.978 "zone_management": false, 00:07:52.978 "zone_append": false, 00:07:52.978 "compare": false, 00:07:52.978 "compare_and_write": false, 00:07:52.978 "abort": true, 00:07:52.978 "seek_hole": false, 00:07:52.978 "seek_data": false, 00:07:52.978 "copy": true, 00:07:52.978 "nvme_iov_md": false 00:07:52.978 }, 00:07:52.978 "memory_domains": [ 00:07:52.978 { 00:07:52.978 "dma_device_id": "system", 00:07:52.978 "dma_device_type": 1 00:07:52.978 }, 00:07:52.978 { 00:07:52.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.978 
"dma_device_type": 2 00:07:52.978 } 00:07:52.978 ], 00:07:52.978 "driver_specific": {} 00:07:52.978 } 00:07:52.978 ] 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.978 
09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.978 09:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.978 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.978 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.978 "name": "Existed_Raid", 00:07:52.978 "uuid": "dff3d245-3740-42af-b593-6bc68bf73c03", 00:07:52.978 "strip_size_kb": 0, 00:07:52.978 "state": "online", 00:07:52.978 "raid_level": "raid1", 00:07:52.978 "superblock": true, 00:07:52.978 "num_base_bdevs": 2, 00:07:52.978 "num_base_bdevs_discovered": 2, 00:07:52.978 "num_base_bdevs_operational": 2, 00:07:52.978 "base_bdevs_list": [ 00:07:52.978 { 00:07:52.978 "name": "BaseBdev1", 00:07:52.978 "uuid": "1806382f-06b2-4f92-b68c-0ec42862ad8f", 00:07:52.978 "is_configured": true, 00:07:52.978 "data_offset": 2048, 00:07:52.978 "data_size": 63488 00:07:52.978 }, 00:07:52.978 { 00:07:52.978 "name": "BaseBdev2", 00:07:52.978 "uuid": "0b99dc56-4520-4071-acad-e019644124ae", 00:07:52.978 "is_configured": true, 00:07:52.978 "data_offset": 2048, 00:07:52.978 "data_size": 63488 00:07:52.978 } 00:07:52.978 ] 00:07:52.978 }' 00:07:52.978 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.978 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.237 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:53.237 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:53.237 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:53.237 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:53.237 
09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:53.237 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:53.237 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:53.237 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.237 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.237 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:53.237 [2024-12-06 09:45:18.433864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.237 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.237 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:53.237 "name": "Existed_Raid", 00:07:53.237 "aliases": [ 00:07:53.237 "dff3d245-3740-42af-b593-6bc68bf73c03" 00:07:53.237 ], 00:07:53.237 "product_name": "Raid Volume", 00:07:53.237 "block_size": 512, 00:07:53.237 "num_blocks": 63488, 00:07:53.237 "uuid": "dff3d245-3740-42af-b593-6bc68bf73c03", 00:07:53.237 "assigned_rate_limits": { 00:07:53.237 "rw_ios_per_sec": 0, 00:07:53.237 "rw_mbytes_per_sec": 0, 00:07:53.237 "r_mbytes_per_sec": 0, 00:07:53.237 "w_mbytes_per_sec": 0 00:07:53.237 }, 00:07:53.237 "claimed": false, 00:07:53.237 "zoned": false, 00:07:53.237 "supported_io_types": { 00:07:53.237 "read": true, 00:07:53.237 "write": true, 00:07:53.237 "unmap": false, 00:07:53.237 "flush": false, 00:07:53.237 "reset": true, 00:07:53.237 "nvme_admin": false, 00:07:53.237 "nvme_io": false, 00:07:53.237 "nvme_io_md": false, 00:07:53.237 "write_zeroes": true, 00:07:53.237 "zcopy": false, 00:07:53.237 "get_zone_info": false, 00:07:53.237 "zone_management": false, 00:07:53.237 
"zone_append": false, 00:07:53.237 "compare": false, 00:07:53.237 "compare_and_write": false, 00:07:53.237 "abort": false, 00:07:53.237 "seek_hole": false, 00:07:53.237 "seek_data": false, 00:07:53.237 "copy": false, 00:07:53.237 "nvme_iov_md": false 00:07:53.237 }, 00:07:53.237 "memory_domains": [ 00:07:53.237 { 00:07:53.237 "dma_device_id": "system", 00:07:53.237 "dma_device_type": 1 00:07:53.237 }, 00:07:53.237 { 00:07:53.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.237 "dma_device_type": 2 00:07:53.237 }, 00:07:53.237 { 00:07:53.237 "dma_device_id": "system", 00:07:53.237 "dma_device_type": 1 00:07:53.237 }, 00:07:53.237 { 00:07:53.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.237 "dma_device_type": 2 00:07:53.237 } 00:07:53.237 ], 00:07:53.237 "driver_specific": { 00:07:53.237 "raid": { 00:07:53.237 "uuid": "dff3d245-3740-42af-b593-6bc68bf73c03", 00:07:53.237 "strip_size_kb": 0, 00:07:53.237 "state": "online", 00:07:53.237 "raid_level": "raid1", 00:07:53.237 "superblock": true, 00:07:53.237 "num_base_bdevs": 2, 00:07:53.237 "num_base_bdevs_discovered": 2, 00:07:53.237 "num_base_bdevs_operational": 2, 00:07:53.237 "base_bdevs_list": [ 00:07:53.237 { 00:07:53.237 "name": "BaseBdev1", 00:07:53.237 "uuid": "1806382f-06b2-4f92-b68c-0ec42862ad8f", 00:07:53.237 "is_configured": true, 00:07:53.237 "data_offset": 2048, 00:07:53.237 "data_size": 63488 00:07:53.237 }, 00:07:53.237 { 00:07:53.237 "name": "BaseBdev2", 00:07:53.237 "uuid": "0b99dc56-4520-4071-acad-e019644124ae", 00:07:53.237 "is_configured": true, 00:07:53.237 "data_offset": 2048, 00:07:53.237 "data_size": 63488 00:07:53.237 } 00:07:53.237 ] 00:07:53.238 } 00:07:53.238 } 00:07:53.238 }' 00:07:53.238 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:53.497 
BaseBdev2' 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.497 [2024-12-06 09:45:18.629302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.497 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.756 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.756 "name": "Existed_Raid", 00:07:53.756 "uuid": "dff3d245-3740-42af-b593-6bc68bf73c03", 00:07:53.756 "strip_size_kb": 0, 00:07:53.756 "state": "online", 00:07:53.756 "raid_level": "raid1", 00:07:53.756 "superblock": true, 00:07:53.756 "num_base_bdevs": 2, 00:07:53.756 "num_base_bdevs_discovered": 1, 00:07:53.756 "num_base_bdevs_operational": 1, 00:07:53.756 "base_bdevs_list": [ 00:07:53.756 { 00:07:53.756 "name": null, 00:07:53.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.756 "is_configured": false, 00:07:53.757 "data_offset": 0, 00:07:53.757 "data_size": 63488 00:07:53.757 }, 00:07:53.757 { 00:07:53.757 "name": "BaseBdev2", 00:07:53.757 "uuid": "0b99dc56-4520-4071-acad-e019644124ae", 00:07:53.757 "is_configured": true, 00:07:53.757 "data_offset": 2048, 
00:07:53.757 "data_size": 63488 00:07:53.757 } 00:07:53.757 ] 00:07:53.757 }' 00:07:53.757 09:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.757 09:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.016 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:54.016 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.016 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.016 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:54.016 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.017 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.017 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.017 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:54.017 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:54.017 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:54.017 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.017 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.017 [2024-12-06 09:45:19.222750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:54.017 [2024-12-06 09:45:19.222865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.276 [2024-12-06 09:45:19.319577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:07:54.276 [2024-12-06 09:45:19.319645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.276 [2024-12-06 09:45:19.319657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62913 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62913 ']' 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62913 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:54.276 
09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62913 00:07:54.276 killing process with pid 62913 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62913' 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62913 00:07:54.276 [2024-12-06 09:45:19.416224] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:54.276 09:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62913 00:07:54.276 [2024-12-06 09:45:19.432587] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.656 09:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:55.656 00:07:55.656 real 0m5.049s 00:07:55.656 user 0m7.289s 00:07:55.656 sys 0m0.797s 00:07:55.656 09:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.656 09:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.656 ************************************ 00:07:55.656 END TEST raid_state_function_test_sb 00:07:55.656 ************************************ 00:07:55.656 09:45:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:55.656 09:45:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:55.656 09:45:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.656 09:45:20 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:55.656 ************************************ 00:07:55.656 START TEST raid_superblock_test 00:07:55.656 ************************************ 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63165 00:07:55.656 09:45:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63165 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63165 ']' 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.656 09:45:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.656 [2024-12-06 09:45:20.733710] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:07:55.657 [2024-12-06 09:45:20.733835] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63165 ] 00:07:55.657 [2024-12-06 09:45:20.904504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.915 [2024-12-06 09:45:21.023317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.175 [2024-12-06 09:45:21.226884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.175 [2024-12-06 09:45:21.226925] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:56.436 
09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.436 malloc1 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.436 [2024-12-06 09:45:21.627398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:56.436 [2024-12-06 09:45:21.627459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.436 [2024-12-06 09:45:21.627480] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:56.436 [2024-12-06 09:45:21.627489] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.436 [2024-12-06 09:45:21.629585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.436 [2024-12-06 09:45:21.629622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:56.436 pt1 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.436 malloc2 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.436 [2024-12-06 09:45:21.681821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:56.436 [2024-12-06 09:45:21.681879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.436 [2024-12-06 09:45:21.681903] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:56.436 [2024-12-06 09:45:21.681913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.436 [2024-12-06 09:45:21.684125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.436 [2024-12-06 09:45:21.684169] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:56.436 
pt2 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.436 [2024-12-06 09:45:21.693845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:56.436 [2024-12-06 09:45:21.695610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:56.436 [2024-12-06 09:45:21.695776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:56.436 [2024-12-06 09:45:21.695794] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:56.436 [2024-12-06 09:45:21.696030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:56.436 [2024-12-06 09:45:21.696208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:56.436 [2024-12-06 09:45:21.696231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:56.436 [2024-12-06 09:45:21.696373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.436 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.697 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.697 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.697 "name": "raid_bdev1", 00:07:56.697 "uuid": "375c3e96-727a-46f1-aed7-c001d0feeba7", 00:07:56.697 "strip_size_kb": 0, 00:07:56.697 "state": "online", 00:07:56.697 "raid_level": "raid1", 00:07:56.697 "superblock": true, 00:07:56.697 "num_base_bdevs": 2, 00:07:56.697 "num_base_bdevs_discovered": 2, 00:07:56.697 "num_base_bdevs_operational": 2, 00:07:56.697 "base_bdevs_list": [ 00:07:56.697 { 00:07:56.697 "name": "pt1", 00:07:56.697 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:56.697 "is_configured": true, 00:07:56.697 "data_offset": 2048, 00:07:56.697 "data_size": 63488 00:07:56.697 }, 00:07:56.697 { 00:07:56.697 "name": "pt2", 00:07:56.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.697 "is_configured": true, 00:07:56.697 "data_offset": 2048, 00:07:56.697 "data_size": 63488 00:07:56.697 } 00:07:56.697 ] 00:07:56.697 }' 00:07:56.697 09:45:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.697 09:45:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.957 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:56.957 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:56.957 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.957 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.957 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.957 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.957 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.957 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.957 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.957 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.957 [2024-12-06 09:45:22.165332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.957 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.957 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:56.957 "name": "raid_bdev1", 00:07:56.957 "aliases": [ 00:07:56.957 "375c3e96-727a-46f1-aed7-c001d0feeba7" 00:07:56.957 ], 00:07:56.957 "product_name": "Raid Volume", 00:07:56.957 "block_size": 512, 00:07:56.957 "num_blocks": 63488, 00:07:56.957 "uuid": "375c3e96-727a-46f1-aed7-c001d0feeba7", 00:07:56.957 "assigned_rate_limits": { 00:07:56.957 "rw_ios_per_sec": 0, 00:07:56.957 "rw_mbytes_per_sec": 0, 00:07:56.957 "r_mbytes_per_sec": 0, 00:07:56.957 "w_mbytes_per_sec": 0 00:07:56.957 }, 00:07:56.957 "claimed": false, 00:07:56.957 "zoned": false, 00:07:56.957 "supported_io_types": { 00:07:56.957 "read": true, 00:07:56.957 "write": true, 00:07:56.957 "unmap": false, 00:07:56.957 "flush": false, 00:07:56.957 "reset": true, 00:07:56.957 "nvme_admin": false, 00:07:56.957 "nvme_io": false, 00:07:56.957 "nvme_io_md": false, 00:07:56.957 "write_zeroes": true, 00:07:56.957 "zcopy": false, 00:07:56.957 "get_zone_info": false, 00:07:56.957 "zone_management": false, 00:07:56.957 "zone_append": false, 00:07:56.957 "compare": false, 00:07:56.957 "compare_and_write": false, 00:07:56.957 "abort": false, 00:07:56.957 "seek_hole": false, 00:07:56.957 "seek_data": false, 00:07:56.957 "copy": false, 00:07:56.957 "nvme_iov_md": false 00:07:56.957 }, 00:07:56.957 "memory_domains": [ 00:07:56.957 { 00:07:56.957 "dma_device_id": "system", 00:07:56.957 "dma_device_type": 1 00:07:56.957 }, 00:07:56.957 { 00:07:56.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.957 "dma_device_type": 2 00:07:56.957 }, 00:07:56.957 { 00:07:56.957 "dma_device_id": "system", 00:07:56.957 "dma_device_type": 1 00:07:56.957 }, 00:07:56.957 { 00:07:56.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.957 "dma_device_type": 2 00:07:56.957 } 00:07:56.957 ], 00:07:56.957 "driver_specific": { 00:07:56.957 "raid": { 00:07:56.957 "uuid": "375c3e96-727a-46f1-aed7-c001d0feeba7", 00:07:56.957 "strip_size_kb": 0, 00:07:56.957 "state": "online", 00:07:56.957 "raid_level": "raid1", 
00:07:56.957 "superblock": true, 00:07:56.957 "num_base_bdevs": 2, 00:07:56.957 "num_base_bdevs_discovered": 2, 00:07:56.957 "num_base_bdevs_operational": 2, 00:07:56.957 "base_bdevs_list": [ 00:07:56.957 { 00:07:56.957 "name": "pt1", 00:07:56.957 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.957 "is_configured": true, 00:07:56.957 "data_offset": 2048, 00:07:56.957 "data_size": 63488 00:07:56.958 }, 00:07:56.958 { 00:07:56.958 "name": "pt2", 00:07:56.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.958 "is_configured": true, 00:07:56.958 "data_offset": 2048, 00:07:56.958 "data_size": 63488 00:07:56.958 } 00:07:56.958 ] 00:07:56.958 } 00:07:56.958 } 00:07:56.958 }' 00:07:56.958 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:57.218 pt2' 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:57.218 [2024-12-06 09:45:22.416866] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=375c3e96-727a-46f1-aed7-c001d0feeba7 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 375c3e96-727a-46f1-aed7-c001d0feeba7 ']' 00:07:57.218 09:45:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.218 [2024-12-06 09:45:22.464483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.218 [2024-12-06 09:45:22.464513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.218 [2024-12-06 09:45:22.464602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.218 [2024-12-06 09:45:22.464663] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.218 [2024-12-06 09:45:22.464674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:57.218 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:57.479 09:45:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.479 [2024-12-06 09:45:22.592297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:57.479 [2024-12-06 09:45:22.594276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:57.479 [2024-12-06 09:45:22.594343] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:57.479 [2024-12-06 09:45:22.594395] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:57.479 [2024-12-06 09:45:22.594409] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.479 [2024-12-06 09:45:22.594420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:57.479 request: 00:07:57.479 { 00:07:57.479 "name": "raid_bdev1", 00:07:57.479 "raid_level": "raid1", 00:07:57.479 "base_bdevs": [ 00:07:57.479 "malloc1", 00:07:57.479 "malloc2" 00:07:57.479 ], 00:07:57.479 "superblock": false, 00:07:57.479 "method": "bdev_raid_create", 00:07:57.479 "req_id": 1 00:07:57.479 } 00:07:57.479 Got 
JSON-RPC error response 00:07:57.479 response: 00:07:57.479 { 00:07:57.479 "code": -17, 00:07:57.479 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:57.479 } 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.479 [2024-12-06 09:45:22.656177] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:57.479 [2024-12-06 09:45:22.656233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:07:57.479 [2024-12-06 09:45:22.656252] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:57.479 [2024-12-06 09:45:22.656263] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:57.479 [2024-12-06 09:45:22.658454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:57.479 [2024-12-06 09:45:22.658491] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:57.479 [2024-12-06 09:45:22.658586] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:57.479 [2024-12-06 09:45:22.658645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:57.479 pt1 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.479 
09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.479 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.480 "name": "raid_bdev1", 00:07:57.480 "uuid": "375c3e96-727a-46f1-aed7-c001d0feeba7", 00:07:57.480 "strip_size_kb": 0, 00:07:57.480 "state": "configuring", 00:07:57.480 "raid_level": "raid1", 00:07:57.480 "superblock": true, 00:07:57.480 "num_base_bdevs": 2, 00:07:57.480 "num_base_bdevs_discovered": 1, 00:07:57.480 "num_base_bdevs_operational": 2, 00:07:57.480 "base_bdevs_list": [ 00:07:57.480 { 00:07:57.480 "name": "pt1", 00:07:57.480 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:57.480 "is_configured": true, 00:07:57.480 "data_offset": 2048, 00:07:57.480 "data_size": 63488 00:07:57.480 }, 00:07:57.480 { 00:07:57.480 "name": null, 00:07:57.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:57.480 "is_configured": false, 00:07:57.480 "data_offset": 2048, 00:07:57.480 "data_size": 63488 00:07:57.480 } 00:07:57.480 ] 00:07:57.480 }' 00:07:57.480 09:45:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.480 09:45:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.050 [2024-12-06 09:45:23.067503] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.050 [2024-12-06 09:45:23.067574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.050 [2024-12-06 09:45:23.067596] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:58.050 [2024-12-06 09:45:23.067606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.050 [2024-12-06 09:45:23.068075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.050 [2024-12-06 09:45:23.068111] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.050 [2024-12-06 09:45:23.068244] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:58.050 [2024-12-06 09:45:23.068279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.050 [2024-12-06 09:45:23.068426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:58.050 [2024-12-06 09:45:23.068448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:58.050 [2024-12-06 09:45:23.068720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:58.050 [2024-12-06 09:45:23.068903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:58.050 [2024-12-06 09:45:23.068921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:07:58.050 [2024-12-06 09:45:23.069082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.050 pt2 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.050 "name": "raid_bdev1", 00:07:58.050 "uuid": "375c3e96-727a-46f1-aed7-c001d0feeba7", 00:07:58.050 "strip_size_kb": 0, 00:07:58.050 "state": "online", 00:07:58.050 "raid_level": "raid1", 00:07:58.050 "superblock": true, 00:07:58.050 "num_base_bdevs": 2, 00:07:58.050 "num_base_bdevs_discovered": 2, 00:07:58.050 "num_base_bdevs_operational": 2, 00:07:58.050 "base_bdevs_list": [ 00:07:58.050 { 00:07:58.050 "name": "pt1", 00:07:58.050 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.050 "is_configured": true, 00:07:58.050 "data_offset": 2048, 00:07:58.050 "data_size": 63488 00:07:58.050 }, 00:07:58.050 { 00:07:58.050 "name": "pt2", 00:07:58.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.050 "is_configured": true, 00:07:58.050 "data_offset": 2048, 00:07:58.050 "data_size": 63488 00:07:58.050 } 00:07:58.050 ] 00:07:58.050 }' 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.050 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.311 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:58.311 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:58.311 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.311 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.311 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.311 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.311 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.311 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.311 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.311 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.311 [2024-12-06 09:45:23.518946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.311 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.311 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.311 "name": "raid_bdev1", 00:07:58.311 "aliases": [ 00:07:58.311 "375c3e96-727a-46f1-aed7-c001d0feeba7" 00:07:58.311 ], 00:07:58.311 "product_name": "Raid Volume", 00:07:58.311 "block_size": 512, 00:07:58.311 "num_blocks": 63488, 00:07:58.311 "uuid": "375c3e96-727a-46f1-aed7-c001d0feeba7", 00:07:58.311 "assigned_rate_limits": { 00:07:58.311 "rw_ios_per_sec": 0, 00:07:58.311 "rw_mbytes_per_sec": 0, 00:07:58.311 "r_mbytes_per_sec": 0, 00:07:58.311 "w_mbytes_per_sec": 0 00:07:58.311 }, 00:07:58.311 "claimed": false, 00:07:58.311 "zoned": false, 00:07:58.311 "supported_io_types": { 00:07:58.311 "read": true, 00:07:58.311 "write": true, 00:07:58.311 "unmap": false, 00:07:58.311 "flush": false, 00:07:58.311 "reset": true, 00:07:58.311 "nvme_admin": false, 00:07:58.311 "nvme_io": false, 00:07:58.311 "nvme_io_md": false, 00:07:58.311 "write_zeroes": true, 00:07:58.311 "zcopy": false, 00:07:58.311 "get_zone_info": false, 00:07:58.311 "zone_management": false, 00:07:58.311 "zone_append": false, 00:07:58.311 "compare": false, 00:07:58.311 "compare_and_write": false, 00:07:58.311 "abort": false, 00:07:58.311 "seek_hole": false, 00:07:58.311 "seek_data": false, 00:07:58.311 "copy": false, 00:07:58.311 "nvme_iov_md": false 00:07:58.311 }, 00:07:58.311 "memory_domains": [ 00:07:58.311 { 00:07:58.311 "dma_device_id": 
"system", 00:07:58.311 "dma_device_type": 1 00:07:58.311 }, 00:07:58.311 { 00:07:58.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.311 "dma_device_type": 2 00:07:58.311 }, 00:07:58.311 { 00:07:58.311 "dma_device_id": "system", 00:07:58.311 "dma_device_type": 1 00:07:58.311 }, 00:07:58.311 { 00:07:58.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.311 "dma_device_type": 2 00:07:58.311 } 00:07:58.311 ], 00:07:58.311 "driver_specific": { 00:07:58.311 "raid": { 00:07:58.311 "uuid": "375c3e96-727a-46f1-aed7-c001d0feeba7", 00:07:58.311 "strip_size_kb": 0, 00:07:58.311 "state": "online", 00:07:58.311 "raid_level": "raid1", 00:07:58.311 "superblock": true, 00:07:58.311 "num_base_bdevs": 2, 00:07:58.311 "num_base_bdevs_discovered": 2, 00:07:58.311 "num_base_bdevs_operational": 2, 00:07:58.311 "base_bdevs_list": [ 00:07:58.311 { 00:07:58.311 "name": "pt1", 00:07:58.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.311 "is_configured": true, 00:07:58.311 "data_offset": 2048, 00:07:58.311 "data_size": 63488 00:07:58.311 }, 00:07:58.311 { 00:07:58.311 "name": "pt2", 00:07:58.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.311 "is_configured": true, 00:07:58.311 "data_offset": 2048, 00:07:58.311 "data_size": 63488 00:07:58.311 } 00:07:58.311 ] 00:07:58.311 } 00:07:58.311 } 00:07:58.311 }' 00:07:58.311 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:58.573 pt2' 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 
-- # jq -r '.[] | .uuid' 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.573 [2024-12-06 09:45:23.746573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 375c3e96-727a-46f1-aed7-c001d0feeba7 '!=' 375c3e96-727a-46f1-aed7-c001d0feeba7 ']' 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.573 [2024-12-06 09:45:23.790264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.573 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.833 09:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.833 "name": "raid_bdev1", 00:07:58.833 "uuid": "375c3e96-727a-46f1-aed7-c001d0feeba7", 00:07:58.833 "strip_size_kb": 0, 00:07:58.833 "state": "online", 00:07:58.833 "raid_level": "raid1", 00:07:58.833 "superblock": true, 00:07:58.833 "num_base_bdevs": 2, 00:07:58.833 "num_base_bdevs_discovered": 1, 00:07:58.833 "num_base_bdevs_operational": 1, 00:07:58.833 "base_bdevs_list": [ 00:07:58.833 { 00:07:58.833 "name": null, 00:07:58.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.833 "is_configured": false, 00:07:58.833 "data_offset": 0, 00:07:58.833 "data_size": 63488 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "name": "pt2", 00:07:58.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.833 "is_configured": true, 00:07:58.833 "data_offset": 2048, 00:07:58.833 "data_size": 63488 00:07:58.833 } 00:07:58.833 ] 00:07:58.833 }' 00:07:58.833 09:45:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.833 09:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.094 [2024-12-06 09:45:24.241484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.094 [2024-12-06 09:45:24.241514] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.094 [2024-12-06 09:45:24.241591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.094 [2024-12-06 09:45:24.241643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.094 [2024-12-06 09:45:24.241659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:59.094 
09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.094 [2024-12-06 09:45:24.305344] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:59.094 [2024-12-06 09:45:24.305406] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.094 [2024-12-06 09:45:24.305423] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:59.094 [2024-12-06 09:45:24.305433] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.094 [2024-12-06 
09:45:24.307518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.094 [2024-12-06 09:45:24.307558] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:59.094 [2024-12-06 09:45:24.307636] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:59.094 [2024-12-06 09:45:24.307684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:59.094 [2024-12-06 09:45:24.307776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:59.094 [2024-12-06 09:45:24.307796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:59.094 [2024-12-06 09:45:24.308037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:59.094 [2024-12-06 09:45:24.308214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:59.094 [2024-12-06 09:45:24.308232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:59.094 [2024-12-06 09:45:24.308374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.094 pt2 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.094 "name": "raid_bdev1", 00:07:59.094 "uuid": "375c3e96-727a-46f1-aed7-c001d0feeba7", 00:07:59.094 "strip_size_kb": 0, 00:07:59.094 "state": "online", 00:07:59.094 "raid_level": "raid1", 00:07:59.094 "superblock": true, 00:07:59.094 "num_base_bdevs": 2, 00:07:59.094 "num_base_bdevs_discovered": 1, 00:07:59.094 "num_base_bdevs_operational": 1, 00:07:59.094 "base_bdevs_list": [ 00:07:59.094 { 00:07:59.094 "name": null, 00:07:59.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.094 "is_configured": false, 00:07:59.094 "data_offset": 2048, 00:07:59.094 "data_size": 63488 00:07:59.094 }, 00:07:59.094 { 00:07:59.094 "name": "pt2", 00:07:59.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.094 "is_configured": true, 00:07:59.094 "data_offset": 2048, 00:07:59.094 "data_size": 63488 00:07:59.094 } 00:07:59.094 ] 00:07:59.094 }' 
00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.094 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.665 [2024-12-06 09:45:24.696685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.665 [2024-12-06 09:45:24.696725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.665 [2024-12-06 09:45:24.696807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.665 [2024-12-06 09:45:24.696866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.665 [2024-12-06 09:45:24.696881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.665 [2024-12-06 09:45:24.760579] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:59.665 [2024-12-06 09:45:24.760646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.665 [2024-12-06 09:45:24.760664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:59.665 [2024-12-06 09:45:24.760673] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.665 [2024-12-06 09:45:24.762847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.665 [2024-12-06 09:45:24.762884] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:59.665 [2024-12-06 09:45:24.762967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:59.665 [2024-12-06 09:45:24.763014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:59.665 [2024-12-06 09:45:24.763180] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:59.665 [2024-12-06 09:45:24.763195] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.665 [2024-12-06 09:45:24.763211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:59.665 [2024-12-06 09:45:24.763262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:07:59.665 [2024-12-06 09:45:24.763332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:59.665 [2024-12-06 09:45:24.763344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:59.665 [2024-12-06 09:45:24.763590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:59.665 [2024-12-06 09:45:24.763756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:59.665 [2024-12-06 09:45:24.763777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:59.665 [2024-12-06 09:45:24.763924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.665 pt1 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.665 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.666 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:59.666 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.666 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.666 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:59.666 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.666 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.666 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.666 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.666 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.666 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.666 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.666 "name": "raid_bdev1", 00:07:59.666 "uuid": "375c3e96-727a-46f1-aed7-c001d0feeba7", 00:07:59.666 "strip_size_kb": 0, 00:07:59.666 "state": "online", 00:07:59.666 "raid_level": "raid1", 00:07:59.666 "superblock": true, 00:07:59.666 "num_base_bdevs": 2, 00:07:59.666 "num_base_bdevs_discovered": 1, 00:07:59.666 "num_base_bdevs_operational": 1, 00:07:59.666 "base_bdevs_list": [ 00:07:59.666 { 00:07:59.666 "name": null, 00:07:59.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.666 "is_configured": false, 00:07:59.666 "data_offset": 2048, 00:07:59.666 "data_size": 63488 00:07:59.666 }, 00:07:59.666 { 00:07:59.666 "name": "pt2", 00:07:59.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.666 "is_configured": true, 00:07:59.666 "data_offset": 2048, 00:07:59.666 "data_size": 63488 00:07:59.666 } 00:07:59.666 ] 00:07:59.666 }' 00:07:59.666 09:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.666 09:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.244 [2024-12-06 09:45:25.255969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 375c3e96-727a-46f1-aed7-c001d0feeba7 '!=' 375c3e96-727a-46f1-aed7-c001d0feeba7 ']' 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63165 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63165 ']' 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63165 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63165 00:08:00.244 09:45:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.244 killing process with pid 63165 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63165' 00:08:00.244 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63165 00:08:00.244 [2024-12-06 09:45:25.337739] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.244 [2024-12-06 09:45:25.337848] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.244 [2024-12-06 09:45:25.337898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.244 [2024-12-06 09:45:25.337913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 09:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63165 00:08:00.503 [2024-12-06 09:45:25.544710] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.442 09:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:01.442 00:08:01.442 real 0m5.999s 00:08:01.442 user 0m9.155s 00:08:01.442 sys 0m0.965s 00:08:01.442 09:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.442 09:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.442 ************************************ 00:08:01.442 END TEST raid_superblock_test 00:08:01.442 ************************************ 00:08:01.442 09:45:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:01.442 09:45:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:01.442 09:45:26 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.442 09:45:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.442 ************************************ 00:08:01.442 START TEST raid_read_error_test 00:08:01.442 ************************************ 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:01.442 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:01.702 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:01.702 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:01.702 09:45:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:01.702 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:01.702 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:01.702 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:01.702 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:01.702 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:01.702 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GaBvT7HkF3 00:08:01.702 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63495 00:08:01.702 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63495 00:08:01.702 09:45:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:01.703 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63495 ']' 00:08:01.703 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.703 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.703 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:01.703 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.703 09:45:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.703 [2024-12-06 09:45:26.808914] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:08:01.703 [2024-12-06 09:45:26.809056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63495 ] 00:08:01.963 [2024-12-06 09:45:26.976945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.963 [2024-12-06 09:45:27.093673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.223 [2024-12-06 09:45:27.289299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.223 [2024-12-06 09:45:27.289374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.483 BaseBdev1_malloc 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.483 true 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.483 [2024-12-06 09:45:27.683245] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:02.483 [2024-12-06 09:45:27.683304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.483 [2024-12-06 09:45:27.683327] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:02.483 [2024-12-06 09:45:27.683338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.483 [2024-12-06 09:45:27.685647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.483 [2024-12-06 09:45:27.685688] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:02.483 BaseBdev1 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:02.483 BaseBdev2_malloc 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.483 true 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.483 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.483 [2024-12-06 09:45:27.750115] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:02.483 [2024-12-06 09:45:27.750183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.483 [2024-12-06 09:45:27.750199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:02.483 [2024-12-06 09:45:27.750210] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.483 [2024-12-06 09:45:27.752261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.483 [2024-12-06 09:45:27.752299] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:02.483 BaseBdev2 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:02.742 09:45:27 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.742 [2024-12-06 09:45:27.762174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.742 [2024-12-06 09:45:27.763934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.742 [2024-12-06 09:45:27.764125] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:02.742 [2024-12-06 09:45:27.764157] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:02.742 [2024-12-06 09:45:27.764377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:02.742 [2024-12-06 09:45:27.764549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:02.742 [2024-12-06 09:45:27.764567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:02.742 [2024-12-06 09:45:27.764713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.742 "name": "raid_bdev1", 00:08:02.742 "uuid": "aaef28db-b4ff-42d3-af92-0a16a18e87f8", 00:08:02.742 "strip_size_kb": 0, 00:08:02.742 "state": "online", 00:08:02.742 "raid_level": "raid1", 00:08:02.742 "superblock": true, 00:08:02.742 "num_base_bdevs": 2, 00:08:02.742 "num_base_bdevs_discovered": 2, 00:08:02.742 "num_base_bdevs_operational": 2, 00:08:02.742 "base_bdevs_list": [ 00:08:02.742 { 00:08:02.742 "name": "BaseBdev1", 00:08:02.742 "uuid": "290a2e74-0fd5-58dc-84ce-f3a9c0884b8b", 00:08:02.742 "is_configured": true, 00:08:02.742 "data_offset": 2048, 00:08:02.742 "data_size": 63488 00:08:02.742 }, 00:08:02.742 { 00:08:02.742 "name": "BaseBdev2", 00:08:02.742 "uuid": "06d131cd-3051-59b0-88ef-8d910094b07d", 00:08:02.742 "is_configured": true, 00:08:02.742 "data_offset": 2048, 00:08:02.742 "data_size": 63488 00:08:02.742 } 00:08:02.742 ] 00:08:02.742 }' 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.742 09:45:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.001 09:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:03.001 09:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:03.260 [2024-12-06 09:45:28.290608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.197 09:45:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.197 "name": "raid_bdev1", 00:08:04.197 "uuid": "aaef28db-b4ff-42d3-af92-0a16a18e87f8", 00:08:04.197 "strip_size_kb": 0, 00:08:04.197 "state": "online", 00:08:04.197 "raid_level": "raid1", 00:08:04.197 "superblock": true, 00:08:04.197 "num_base_bdevs": 2, 00:08:04.197 "num_base_bdevs_discovered": 2, 00:08:04.197 "num_base_bdevs_operational": 2, 00:08:04.197 "base_bdevs_list": [ 00:08:04.197 { 00:08:04.197 "name": "BaseBdev1", 00:08:04.197 "uuid": "290a2e74-0fd5-58dc-84ce-f3a9c0884b8b", 00:08:04.197 "is_configured": true, 00:08:04.197 "data_offset": 2048, 00:08:04.197 "data_size": 63488 00:08:04.197 }, 00:08:04.197 { 00:08:04.197 "name": "BaseBdev2", 00:08:04.197 "uuid": "06d131cd-3051-59b0-88ef-8d910094b07d", 00:08:04.197 "is_configured": true, 00:08:04.197 "data_offset": 2048, 00:08:04.197 "data_size": 63488 
00:08:04.197 } 00:08:04.197 ] 00:08:04.197 }' 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.197 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.456 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.456 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.456 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.456 [2024-12-06 09:45:29.634102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.456 [2024-12-06 09:45:29.634156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.456 [2024-12-06 09:45:29.636865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.456 [2024-12-06 09:45:29.636913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.456 [2024-12-06 09:45:29.636993] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.456 [2024-12-06 09:45:29.637005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:04.456 { 00:08:04.456 "results": [ 00:08:04.456 { 00:08:04.456 "job": "raid_bdev1", 00:08:04.456 "core_mask": "0x1", 00:08:04.456 "workload": "randrw", 00:08:04.456 "percentage": 50, 00:08:04.456 "status": "finished", 00:08:04.456 "queue_depth": 1, 00:08:04.456 "io_size": 131072, 00:08:04.456 "runtime": 1.344435, 00:08:04.456 "iops": 17884.836381082016, 00:08:04.456 "mibps": 2235.604547635252, 00:08:04.456 "io_failed": 0, 00:08:04.457 "io_timeout": 0, 00:08:04.457 "avg_latency_us": 53.28932966844372, 00:08:04.457 "min_latency_us": 23.14061135371179, 00:08:04.457 "max_latency_us": 1395.1441048034935 00:08:04.457 } 00:08:04.457 ], 
00:08:04.457 "core_count": 1 00:08:04.457 } 00:08:04.457 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.457 09:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63495 00:08:04.457 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63495 ']' 00:08:04.457 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63495 00:08:04.457 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:04.457 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.457 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63495 00:08:04.457 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.457 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.457 killing process with pid 63495 00:08:04.457 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63495' 00:08:04.457 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63495 00:08:04.457 [2024-12-06 09:45:29.679734] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.457 09:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63495 00:08:04.714 [2024-12-06 09:45:29.812573] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.116 09:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:06.116 09:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GaBvT7HkF3 00:08:06.116 09:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:06.116 09:45:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:06.116 09:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:06.116 09:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.116 09:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:06.116 09:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:06.116 00:08:06.116 real 0m4.266s 00:08:06.116 user 0m5.105s 00:08:06.116 sys 0m0.518s 00:08:06.116 09:45:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.116 09:45:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.116 ************************************ 00:08:06.116 END TEST raid_read_error_test 00:08:06.116 ************************************ 00:08:06.116 09:45:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:06.116 09:45:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:06.116 09:45:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.116 09:45:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.116 ************************************ 00:08:06.116 START TEST raid_write_error_test 00:08:06.116 ************************************ 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Kdfr21dZiF 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63635 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63635 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63635 ']' 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.116 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.116 [2024-12-06 09:45:31.143442] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:06.116 [2024-12-06 09:45:31.143565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63635 ] 00:08:06.116 [2024-12-06 09:45:31.317626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.375 [2024-12-06 09:45:31.431164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.375 [2024-12-06 09:45:31.626316] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.375 [2024-12-06 09:45:31.626394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.942 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.942 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:06.942 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.942 09:45:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:06.942 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.942 09:45:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.942 BaseBdev1_malloc 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.942 true 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.942 [2024-12-06 09:45:32.034934] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:06.942 [2024-12-06 09:45:32.034999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.942 [2024-12-06 09:45:32.035019] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:06.942 [2024-12-06 09:45:32.035030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.942 [2024-12-06 09:45:32.037034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.942 [2024-12-06 09:45:32.037074] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:06.942 BaseBdev1 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.942 BaseBdev2_malloc 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:06.942 09:45:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.942 true 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.942 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.943 [2024-12-06 09:45:32.102114] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:06.943 [2024-12-06 09:45:32.102240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.943 [2024-12-06 09:45:32.102260] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:06.943 [2024-12-06 09:45:32.102271] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.943 [2024-12-06 09:45:32.104263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.943 [2024-12-06 09:45:32.104301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:06.943 BaseBdev2 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.943 [2024-12-06 09:45:32.114161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:06.943 [2024-12-06 09:45:32.115931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.943 [2024-12-06 09:45:32.116177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:06.943 [2024-12-06 09:45:32.116224] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:06.943 [2024-12-06 09:45:32.116464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:06.943 [2024-12-06 09:45:32.116670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:06.943 [2024-12-06 09:45:32.116713] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:06.943 [2024-12-06 09:45:32.116893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.943 "name": "raid_bdev1", 00:08:06.943 "uuid": "dd7f1d4a-e07a-4844-85fe-5b0f57f121dd", 00:08:06.943 "strip_size_kb": 0, 00:08:06.943 "state": "online", 00:08:06.943 "raid_level": "raid1", 00:08:06.943 "superblock": true, 00:08:06.943 "num_base_bdevs": 2, 00:08:06.943 "num_base_bdevs_discovered": 2, 00:08:06.943 "num_base_bdevs_operational": 2, 00:08:06.943 "base_bdevs_list": [ 00:08:06.943 { 00:08:06.943 "name": "BaseBdev1", 00:08:06.943 "uuid": "63efb07c-8fac-5861-9839-2f3e7072f38f", 00:08:06.943 "is_configured": true, 00:08:06.943 "data_offset": 2048, 00:08:06.943 "data_size": 63488 00:08:06.943 }, 00:08:06.943 { 00:08:06.943 "name": "BaseBdev2", 00:08:06.943 "uuid": "cd939eaf-f275-5962-b712-35ca52c3bee5", 00:08:06.943 "is_configured": true, 00:08:06.943 "data_offset": 2048, 00:08:06.943 "data_size": 63488 00:08:06.943 } 00:08:06.943 ] 00:08:06.943 }' 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.943 09:45:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.510 09:45:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:07.510 09:45:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:07.510 [2024-12-06 09:45:32.674511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.448 [2024-12-06 09:45:33.594158] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:08.448 [2024-12-06 09:45:33.594317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:08.448 [2024-12-06 09:45:33.594539] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.448 "name": "raid_bdev1", 00:08:08.448 "uuid": "dd7f1d4a-e07a-4844-85fe-5b0f57f121dd", 00:08:08.448 "strip_size_kb": 0, 00:08:08.448 "state": "online", 00:08:08.448 "raid_level": "raid1", 00:08:08.448 "superblock": true, 00:08:08.448 "num_base_bdevs": 2, 00:08:08.448 "num_base_bdevs_discovered": 1, 00:08:08.448 "num_base_bdevs_operational": 1, 00:08:08.448 "base_bdevs_list": [ 00:08:08.448 { 00:08:08.448 "name": null, 00:08:08.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.448 "is_configured": false, 00:08:08.448 "data_offset": 0, 00:08:08.448 "data_size": 63488 00:08:08.448 }, 00:08:08.448 { 00:08:08.448 "name": 
"BaseBdev2", 00:08:08.448 "uuid": "cd939eaf-f275-5962-b712-35ca52c3bee5", 00:08:08.448 "is_configured": true, 00:08:08.448 "data_offset": 2048, 00:08:08.448 "data_size": 63488 00:08:08.448 } 00:08:08.448 ] 00:08:08.448 }' 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.448 09:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.015 [2024-12-06 09:45:34.059215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.015 [2024-12-06 09:45:34.059303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.015 [2024-12-06 09:45:34.061945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.015 [2024-12-06 09:45:34.062026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.015 [2024-12-06 09:45:34.062103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.015 [2024-12-06 09:45:34.062158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:09.015 { 00:08:09.015 "results": [ 00:08:09.015 { 00:08:09.015 "job": "raid_bdev1", 00:08:09.015 "core_mask": "0x1", 00:08:09.015 "workload": "randrw", 00:08:09.015 "percentage": 50, 00:08:09.015 "status": "finished", 00:08:09.015 "queue_depth": 1, 00:08:09.015 "io_size": 131072, 00:08:09.015 "runtime": 1.385667, 00:08:09.015 "iops": 20938.65264886874, 00:08:09.015 "mibps": 2617.3315811085927, 00:08:09.015 "io_failed": 0, 00:08:09.015 "io_timeout": 0, 
00:08:09.015 "avg_latency_us": 45.125846609813124, 00:08:09.015 "min_latency_us": 22.805240174672488, 00:08:09.015 "max_latency_us": 1387.989519650655 00:08:09.015 } 00:08:09.015 ], 00:08:09.015 "core_count": 1 00:08:09.015 } 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63635 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63635 ']' 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63635 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63635 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.015 killing process with pid 63635 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63635' 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63635 00:08:09.015 [2024-12-06 09:45:34.095815] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.015 09:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63635 00:08:09.015 [2024-12-06 09:45:34.231760] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.392 09:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Kdfr21dZiF 00:08:10.392 09:45:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:10.393 09:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:10.393 09:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:10.393 09:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:10.393 09:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.393 09:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:10.393 09:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:10.393 00:08:10.393 real 0m4.368s 00:08:10.393 user 0m5.285s 00:08:10.393 sys 0m0.517s 00:08:10.393 09:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.393 ************************************ 00:08:10.393 END TEST raid_write_error_test 00:08:10.393 ************************************ 00:08:10.393 09:45:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.393 09:45:35 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:10.393 09:45:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:10.393 09:45:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:10.393 09:45:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:10.393 09:45:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.393 09:45:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.393 ************************************ 00:08:10.393 START TEST raid_state_function_test 00:08:10.393 ************************************ 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:10.393 
09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63779 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63779' 00:08:10.393 Process raid pid: 63779 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63779 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63779 ']' 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
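The `bdev_raid.sh@215-217` lines above pick the strip-size argument from the raid level. A minimal sketch of that selection, with variable names mirroring the script's locals (the values are taken from this run: level raid0, strip size 64):

```shell
# Strip-size selection as at bdev_raid.sh@215-217 above: every level
# except raid1 is striped, so raid0 (and concat) gets '-z 64' while
# raid1 would leave the create argument empty.
raid_level=raid0
strip_size_create_arg=
if [ "$raid_level" != raid1 ]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"
fi
echo "${strip_size_create_arg:-<none>}"
```

For raid1 the same logic falls through and `strip_size_create_arg` stays empty, matching the `superblock=false` / no-`-z` invocations seen elsewhere in the log.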
00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.393 09:45:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.393 [2024-12-06 09:45:35.578956] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:08:10.393 [2024-12-06 09:45:35.579161] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.652 [2024-12-06 09:45:35.734988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.652 [2024-12-06 09:45:35.846548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.911 [2024-12-06 09:45:36.052026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.911 [2024-12-06 09:45:36.052174] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.182 [2024-12-06 09:45:36.421679] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.182 [2024-12-06 09:45:36.421781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.182 [2024-12-06 09:45:36.421812] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.182 [2024-12-06 09:45:36.421836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.182 [2024-12-06 09:45:36.421855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.182 [2024-12-06 09:45:36.421877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
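The `@113` pipeline above captures the raid bdev's state by filtering `bdev_raid_get_bdevs all` output through `jq`. A self-contained re-run of that filter, with the JSON inlined (trimmed from the dump in this log) instead of coming from `rpc_cmd`; requires `jq`:

```shell
# Re-run of the @113 filter: pick one bdev by name out of the
# bdev_raid_get_bdevs array, then read fields off it. The JSON is a
# trimmed copy of the Existed_Raid dump in this log.
bdevs='[{"name":"Existed_Raid","state":"configuring","num_base_bdevs_discovered":0,"num_base_bdevs_operational":3}]'
raid_bdev_info=$(echo "$bdevs" | jq -r '.[] | select(.name == "Existed_Raid")')
echo "$raid_bdev_info" | jq -r .state                       # configuring
echo "$raid_bdev_info" | jq -r .num_base_bdevs_discovered   # 0
```

This is why `verify_raid_bdev_state` can assert on `state`, `raid_level`, and the discovered/operational counts individually after a single RPC round trip.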
00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.182 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.448 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.448 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.448 "name": "Existed_Raid", 00:08:11.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.448 "strip_size_kb": 64, 00:08:11.448 "state": "configuring", 00:08:11.448 "raid_level": "raid0", 00:08:11.448 "superblock": false, 00:08:11.448 "num_base_bdevs": 3, 00:08:11.448 "num_base_bdevs_discovered": 0, 00:08:11.448 "num_base_bdevs_operational": 3, 00:08:11.448 "base_bdevs_list": [ 00:08:11.448 { 00:08:11.448 "name": "BaseBdev1", 00:08:11.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.448 "is_configured": false, 00:08:11.448 "data_offset": 0, 00:08:11.448 "data_size": 0 00:08:11.448 }, 00:08:11.449 { 00:08:11.449 "name": "BaseBdev2", 00:08:11.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.449 "is_configured": false, 00:08:11.449 "data_offset": 0, 00:08:11.449 "data_size": 0 00:08:11.449 }, 00:08:11.449 { 00:08:11.449 "name": "BaseBdev3", 00:08:11.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.449 "is_configured": false, 00:08:11.449 "data_offset": 0, 00:08:11.449 "data_size": 0 00:08:11.449 } 00:08:11.449 ] 00:08:11.449 }' 00:08:11.449 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.449 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.708 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.708 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.708 09:45:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.708 [2024-12-06 09:45:36.920767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.708 [2024-12-06 09:45:36.920851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:11.708 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.708 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:11.708 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.708 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.708 [2024-12-06 09:45:36.932728] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.708 [2024-12-06 09:45:36.932809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.708 [2024-12-06 09:45:36.932835] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.708 [2024-12-06 09:45:36.932857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.708 [2024-12-06 09:45:36.932875] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.708 [2024-12-06 09:45:36.932895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.708 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.708 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:11.708 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
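The delete/re-create cycle above leaves `Existed_Raid` in `configuring` because none of its base bdevs exist yet. A toy restatement of the state rule this test keeps checking (SPDK tracks this in C; the shell function is purely illustrative):

```shell
# Illustrative only: a raid bdev reports 'configuring' until every base
# bdev has attached, then 'online'. The log shows configuring at 0/3
# discovered and online at 2/2.
raid_state() {
    local discovered=$1 operational=$2
    if [ "$discovered" -eq "$operational" ]; then
        echo online
    else
        echo configuring
    fi
}
raid_state 0 3   # configuring
raid_state 3 3   # online
```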
00:08:11.708 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.708 [2024-12-06 09:45:36.979152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.968 BaseBdev1 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.968 09:45:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.968 [ 00:08:11.968 { 00:08:11.968 "name": "BaseBdev1", 00:08:11.968 "aliases": [ 00:08:11.968 "95bbb13d-99bb-4439-bd0a-be1a48b0d738" 00:08:11.968 ], 00:08:11.968 
"product_name": "Malloc disk", 00:08:11.968 "block_size": 512, 00:08:11.968 "num_blocks": 65536, 00:08:11.968 "uuid": "95bbb13d-99bb-4439-bd0a-be1a48b0d738", 00:08:11.968 "assigned_rate_limits": { 00:08:11.968 "rw_ios_per_sec": 0, 00:08:11.968 "rw_mbytes_per_sec": 0, 00:08:11.968 "r_mbytes_per_sec": 0, 00:08:11.968 "w_mbytes_per_sec": 0 00:08:11.968 }, 00:08:11.968 "claimed": true, 00:08:11.968 "claim_type": "exclusive_write", 00:08:11.968 "zoned": false, 00:08:11.968 "supported_io_types": { 00:08:11.968 "read": true, 00:08:11.968 "write": true, 00:08:11.968 "unmap": true, 00:08:11.968 "flush": true, 00:08:11.968 "reset": true, 00:08:11.968 "nvme_admin": false, 00:08:11.968 "nvme_io": false, 00:08:11.968 "nvme_io_md": false, 00:08:11.968 "write_zeroes": true, 00:08:11.968 "zcopy": true, 00:08:11.968 "get_zone_info": false, 00:08:11.968 "zone_management": false, 00:08:11.968 "zone_append": false, 00:08:11.968 "compare": false, 00:08:11.968 "compare_and_write": false, 00:08:11.968 "abort": true, 00:08:11.968 "seek_hole": false, 00:08:11.968 "seek_data": false, 00:08:11.968 "copy": true, 00:08:11.968 "nvme_iov_md": false 00:08:11.968 }, 00:08:11.968 "memory_domains": [ 00:08:11.968 { 00:08:11.968 "dma_device_id": "system", 00:08:11.968 "dma_device_type": 1 00:08:11.968 }, 00:08:11.968 { 00:08:11.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.968 "dma_device_type": 2 00:08:11.968 } 00:08:11.968 ], 00:08:11.968 "driver_specific": {} 00:08:11.968 } 00:08:11.968 ] 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.968 09:45:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.968 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.968 "name": "Existed_Raid", 00:08:11.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.968 "strip_size_kb": 64, 00:08:11.968 "state": "configuring", 00:08:11.968 "raid_level": "raid0", 00:08:11.968 "superblock": false, 00:08:11.968 "num_base_bdevs": 3, 00:08:11.968 "num_base_bdevs_discovered": 1, 00:08:11.968 "num_base_bdevs_operational": 3, 00:08:11.968 "base_bdevs_list": [ 00:08:11.968 { 00:08:11.968 "name": "BaseBdev1", 
00:08:11.968 "uuid": "95bbb13d-99bb-4439-bd0a-be1a48b0d738", 00:08:11.968 "is_configured": true, 00:08:11.968 "data_offset": 0, 00:08:11.968 "data_size": 65536 00:08:11.968 }, 00:08:11.968 { 00:08:11.968 "name": "BaseBdev2", 00:08:11.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.968 "is_configured": false, 00:08:11.968 "data_offset": 0, 00:08:11.968 "data_size": 0 00:08:11.968 }, 00:08:11.968 { 00:08:11.968 "name": "BaseBdev3", 00:08:11.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.969 "is_configured": false, 00:08:11.969 "data_offset": 0, 00:08:11.969 "data_size": 0 00:08:11.969 } 00:08:11.969 ] 00:08:11.969 }' 00:08:11.969 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.969 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.228 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:12.228 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.228 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.228 [2024-12-06 09:45:37.490337] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:12.228 [2024-12-06 09:45:37.490445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:12.228 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.228 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:12.228 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.228 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.487 [2024-12-06 
09:45:37.502384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.487 [2024-12-06 09:45:37.504312] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:12.487 [2024-12-06 09:45:37.504393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:12.487 [2024-12-06 09:45:37.504420] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:12.487 [2024-12-06 09:45:37.504442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.487 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.488 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.488 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.488 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.488 "name": "Existed_Raid", 00:08:12.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.488 "strip_size_kb": 64, 00:08:12.488 "state": "configuring", 00:08:12.488 "raid_level": "raid0", 00:08:12.488 "superblock": false, 00:08:12.488 "num_base_bdevs": 3, 00:08:12.488 "num_base_bdevs_discovered": 1, 00:08:12.488 "num_base_bdevs_operational": 3, 00:08:12.488 "base_bdevs_list": [ 00:08:12.488 { 00:08:12.488 "name": "BaseBdev1", 00:08:12.488 "uuid": "95bbb13d-99bb-4439-bd0a-be1a48b0d738", 00:08:12.488 "is_configured": true, 00:08:12.488 "data_offset": 0, 00:08:12.488 "data_size": 65536 00:08:12.488 }, 00:08:12.488 { 00:08:12.488 "name": "BaseBdev2", 00:08:12.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.488 "is_configured": false, 00:08:12.488 "data_offset": 0, 00:08:12.488 "data_size": 0 00:08:12.488 }, 00:08:12.488 { 00:08:12.488 "name": "BaseBdev3", 00:08:12.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.488 "is_configured": false, 00:08:12.488 "data_offset": 0, 00:08:12.488 "data_size": 0 00:08:12.488 } 00:08:12.488 ] 00:08:12.488 }' 00:08:12.488 09:45:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
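The malloc base bdevs in this test are created with `bdev_malloc_create 32 512 -b BaseBdevN`, and the `bdev_get_bdevs` dumps report `block_size 512, num_blocks 65536` for them. That count is just the requested size over the block size (32 MiB at 512 bytes per block):

```shell
# num_blocks as reported for the 'bdev_malloc_create 32 512' bdevs:
# total size in MiB converted to bytes, divided by the block size.
size_mb=32
block_size=512
num_blocks=$(( size_mb * 1024 * 1024 / block_size ))
echo "$num_blocks"   # 65536
```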
00:08:12.488 09:45:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.748 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:12.748 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.748 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.009 [2024-12-06 09:45:38.043860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.009 BaseBdev2 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:13.009 09:45:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.009 [ 00:08:13.009 { 00:08:13.009 "name": "BaseBdev2", 00:08:13.009 "aliases": [ 00:08:13.009 "50e21d62-4242-41d0-9074-a0e4955a6526" 00:08:13.009 ], 00:08:13.009 "product_name": "Malloc disk", 00:08:13.009 "block_size": 512, 00:08:13.009 "num_blocks": 65536, 00:08:13.009 "uuid": "50e21d62-4242-41d0-9074-a0e4955a6526", 00:08:13.009 "assigned_rate_limits": { 00:08:13.009 "rw_ios_per_sec": 0, 00:08:13.009 "rw_mbytes_per_sec": 0, 00:08:13.009 "r_mbytes_per_sec": 0, 00:08:13.009 "w_mbytes_per_sec": 0 00:08:13.009 }, 00:08:13.009 "claimed": true, 00:08:13.009 "claim_type": "exclusive_write", 00:08:13.009 "zoned": false, 00:08:13.009 "supported_io_types": { 00:08:13.009 "read": true, 00:08:13.009 "write": true, 00:08:13.009 "unmap": true, 00:08:13.009 "flush": true, 00:08:13.009 "reset": true, 00:08:13.009 "nvme_admin": false, 00:08:13.009 "nvme_io": false, 00:08:13.009 "nvme_io_md": false, 00:08:13.009 "write_zeroes": true, 00:08:13.009 "zcopy": true, 00:08:13.009 "get_zone_info": false, 00:08:13.009 "zone_management": false, 00:08:13.009 "zone_append": false, 00:08:13.009 "compare": false, 00:08:13.009 "compare_and_write": false, 00:08:13.009 "abort": true, 00:08:13.009 "seek_hole": false, 00:08:13.009 "seek_data": false, 00:08:13.009 "copy": true, 00:08:13.009 "nvme_iov_md": false 00:08:13.009 }, 00:08:13.009 "memory_domains": [ 00:08:13.009 { 00:08:13.009 "dma_device_id": "system", 00:08:13.009 "dma_device_type": 1 00:08:13.009 }, 00:08:13.009 { 00:08:13.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.009 "dma_device_type": 2 00:08:13.009 } 00:08:13.009 ], 00:08:13.009 "driver_specific": {} 00:08:13.009 } 00:08:13.009 ] 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.009 09:45:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.009 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.010 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.010 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.010 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.010 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.010 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.010 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.010 09:45:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.010 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.010 "name": "Existed_Raid", 00:08:13.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.010 "strip_size_kb": 64, 00:08:13.010 "state": "configuring", 00:08:13.010 "raid_level": "raid0", 00:08:13.010 "superblock": false, 00:08:13.010 "num_base_bdevs": 3, 00:08:13.010 "num_base_bdevs_discovered": 2, 00:08:13.010 "num_base_bdevs_operational": 3, 00:08:13.010 "base_bdevs_list": [ 00:08:13.010 { 00:08:13.010 "name": "BaseBdev1", 00:08:13.010 "uuid": "95bbb13d-99bb-4439-bd0a-be1a48b0d738", 00:08:13.010 "is_configured": true, 00:08:13.010 "data_offset": 0, 00:08:13.010 "data_size": 65536 00:08:13.010 }, 00:08:13.010 { 00:08:13.010 "name": "BaseBdev2", 00:08:13.010 "uuid": "50e21d62-4242-41d0-9074-a0e4955a6526", 00:08:13.010 "is_configured": true, 00:08:13.010 "data_offset": 0, 00:08:13.010 "data_size": 65536 00:08:13.010 }, 00:08:13.010 { 00:08:13.010 "name": "BaseBdev3", 00:08:13.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.010 "is_configured": false, 00:08:13.010 "data_offset": 0, 00:08:13.010 "data_size": 0 00:08:13.010 } 00:08:13.010 ] 00:08:13.010 }' 00:08:13.010 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.010 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.270 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:13.270 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.270 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.531 [2024-12-06 09:45:38.570440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:13.531 [2024-12-06 09:45:38.570573] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:13.531 [2024-12-06 09:45:38.570607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:13.531 [2024-12-06 09:45:38.570901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:13.531 [2024-12-06 09:45:38.571104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:13.531 BaseBdev3 00:08:13.531 [2024-12-06 09:45:38.571167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:13.531 [2024-12-06 09:45:38.571462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.531 
09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.531 [ 00:08:13.531 { 00:08:13.531 "name": "BaseBdev3", 00:08:13.531 "aliases": [ 00:08:13.531 "9781524a-19fc-43ba-83f6-b0709b137818" 00:08:13.531 ], 00:08:13.531 "product_name": "Malloc disk", 00:08:13.531 "block_size": 512, 00:08:13.531 "num_blocks": 65536, 00:08:13.531 "uuid": "9781524a-19fc-43ba-83f6-b0709b137818", 00:08:13.531 "assigned_rate_limits": { 00:08:13.531 "rw_ios_per_sec": 0, 00:08:13.531 "rw_mbytes_per_sec": 0, 00:08:13.531 "r_mbytes_per_sec": 0, 00:08:13.531 "w_mbytes_per_sec": 0 00:08:13.531 }, 00:08:13.531 "claimed": true, 00:08:13.531 "claim_type": "exclusive_write", 00:08:13.531 "zoned": false, 00:08:13.531 "supported_io_types": { 00:08:13.531 "read": true, 00:08:13.531 "write": true, 00:08:13.531 "unmap": true, 00:08:13.531 "flush": true, 00:08:13.531 "reset": true, 00:08:13.531 "nvme_admin": false, 00:08:13.531 "nvme_io": false, 00:08:13.531 "nvme_io_md": false, 00:08:13.531 "write_zeroes": true, 00:08:13.531 "zcopy": true, 00:08:13.531 "get_zone_info": false, 00:08:13.531 "zone_management": false, 00:08:13.531 "zone_append": false, 00:08:13.531 "compare": false, 00:08:13.531 "compare_and_write": false, 00:08:13.531 "abort": true, 00:08:13.531 "seek_hole": false, 00:08:13.531 "seek_data": false, 00:08:13.531 "copy": true, 00:08:13.531 "nvme_iov_md": false 00:08:13.531 }, 00:08:13.531 "memory_domains": [ 00:08:13.531 { 00:08:13.531 "dma_device_id": "system", 00:08:13.531 "dma_device_type": 1 00:08:13.531 }, 00:08:13.531 { 00:08:13.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.531 "dma_device_type": 2 00:08:13.531 } 00:08:13.531 ], 00:08:13.531 "driver_specific": {} 00:08:13.531 } 00:08:13.531 ] 
00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.531 "name": "Existed_Raid", 00:08:13.531 "uuid": "0f095c79-c657-4b94-9434-8095f8a5b858", 00:08:13.531 "strip_size_kb": 64, 00:08:13.531 "state": "online", 00:08:13.531 "raid_level": "raid0", 00:08:13.531 "superblock": false, 00:08:13.531 "num_base_bdevs": 3, 00:08:13.531 "num_base_bdevs_discovered": 3, 00:08:13.531 "num_base_bdevs_operational": 3, 00:08:13.531 "base_bdevs_list": [ 00:08:13.531 { 00:08:13.531 "name": "BaseBdev1", 00:08:13.531 "uuid": "95bbb13d-99bb-4439-bd0a-be1a48b0d738", 00:08:13.531 "is_configured": true, 00:08:13.531 "data_offset": 0, 00:08:13.531 "data_size": 65536 00:08:13.531 }, 00:08:13.531 { 00:08:13.531 "name": "BaseBdev2", 00:08:13.531 "uuid": "50e21d62-4242-41d0-9074-a0e4955a6526", 00:08:13.531 "is_configured": true, 00:08:13.531 "data_offset": 0, 00:08:13.531 "data_size": 65536 00:08:13.531 }, 00:08:13.531 { 00:08:13.531 "name": "BaseBdev3", 00:08:13.531 "uuid": "9781524a-19fc-43ba-83f6-b0709b137818", 00:08:13.531 "is_configured": true, 00:08:13.531 "data_offset": 0, 00:08:13.531 "data_size": 65536 00:08:13.531 } 00:08:13.531 ] 00:08:13.531 }' 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.531 09:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.792 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:13.792 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:13.792 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.792 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:13.792 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.792 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.052 [2024-12-06 09:45:39.073948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:14.052 "name": "Existed_Raid", 00:08:14.052 "aliases": [ 00:08:14.052 "0f095c79-c657-4b94-9434-8095f8a5b858" 00:08:14.052 ], 00:08:14.052 "product_name": "Raid Volume", 00:08:14.052 "block_size": 512, 00:08:14.052 "num_blocks": 196608, 00:08:14.052 "uuid": "0f095c79-c657-4b94-9434-8095f8a5b858", 00:08:14.052 "assigned_rate_limits": { 00:08:14.052 "rw_ios_per_sec": 0, 00:08:14.052 "rw_mbytes_per_sec": 0, 00:08:14.052 "r_mbytes_per_sec": 0, 00:08:14.052 "w_mbytes_per_sec": 0 00:08:14.052 }, 00:08:14.052 "claimed": false, 00:08:14.052 "zoned": false, 00:08:14.052 "supported_io_types": { 00:08:14.052 "read": true, 00:08:14.052 "write": true, 00:08:14.052 "unmap": true, 00:08:14.052 "flush": true, 00:08:14.052 "reset": true, 00:08:14.052 "nvme_admin": false, 00:08:14.052 "nvme_io": false, 00:08:14.052 "nvme_io_md": false, 00:08:14.052 "write_zeroes": true, 00:08:14.052 "zcopy": false, 00:08:14.052 "get_zone_info": false, 00:08:14.052 "zone_management": false, 00:08:14.052 
"zone_append": false, 00:08:14.052 "compare": false, 00:08:14.052 "compare_and_write": false, 00:08:14.052 "abort": false, 00:08:14.052 "seek_hole": false, 00:08:14.052 "seek_data": false, 00:08:14.052 "copy": false, 00:08:14.052 "nvme_iov_md": false 00:08:14.052 }, 00:08:14.052 "memory_domains": [ 00:08:14.052 { 00:08:14.052 "dma_device_id": "system", 00:08:14.052 "dma_device_type": 1 00:08:14.052 }, 00:08:14.052 { 00:08:14.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.052 "dma_device_type": 2 00:08:14.052 }, 00:08:14.052 { 00:08:14.052 "dma_device_id": "system", 00:08:14.052 "dma_device_type": 1 00:08:14.052 }, 00:08:14.052 { 00:08:14.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.052 "dma_device_type": 2 00:08:14.052 }, 00:08:14.052 { 00:08:14.052 "dma_device_id": "system", 00:08:14.052 "dma_device_type": 1 00:08:14.052 }, 00:08:14.052 { 00:08:14.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.052 "dma_device_type": 2 00:08:14.052 } 00:08:14.052 ], 00:08:14.052 "driver_specific": { 00:08:14.052 "raid": { 00:08:14.052 "uuid": "0f095c79-c657-4b94-9434-8095f8a5b858", 00:08:14.052 "strip_size_kb": 64, 00:08:14.052 "state": "online", 00:08:14.052 "raid_level": "raid0", 00:08:14.052 "superblock": false, 00:08:14.052 "num_base_bdevs": 3, 00:08:14.052 "num_base_bdevs_discovered": 3, 00:08:14.052 "num_base_bdevs_operational": 3, 00:08:14.052 "base_bdevs_list": [ 00:08:14.052 { 00:08:14.052 "name": "BaseBdev1", 00:08:14.052 "uuid": "95bbb13d-99bb-4439-bd0a-be1a48b0d738", 00:08:14.052 "is_configured": true, 00:08:14.052 "data_offset": 0, 00:08:14.052 "data_size": 65536 00:08:14.052 }, 00:08:14.052 { 00:08:14.052 "name": "BaseBdev2", 00:08:14.052 "uuid": "50e21d62-4242-41d0-9074-a0e4955a6526", 00:08:14.052 "is_configured": true, 00:08:14.052 "data_offset": 0, 00:08:14.052 "data_size": 65536 00:08:14.052 }, 00:08:14.052 { 00:08:14.052 "name": "BaseBdev3", 00:08:14.052 "uuid": "9781524a-19fc-43ba-83f6-b0709b137818", 00:08:14.052 "is_configured": true, 
00:08:14.052 "data_offset": 0, 00:08:14.052 "data_size": 65536 00:08:14.052 } 00:08:14.052 ] 00:08:14.052 } 00:08:14.052 } 00:08:14.052 }' 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:14.052 BaseBdev2 00:08:14.052 BaseBdev3' 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.052 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.053 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.313 [2024-12-06 09:45:39.325281] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:14.313 [2024-12-06 09:45:39.325310] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.313 [2024-12-06 09:45:39.325361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.313 "name": "Existed_Raid", 00:08:14.313 "uuid": "0f095c79-c657-4b94-9434-8095f8a5b858", 00:08:14.313 "strip_size_kb": 64, 00:08:14.313 "state": "offline", 00:08:14.313 "raid_level": "raid0", 00:08:14.313 "superblock": false, 00:08:14.313 "num_base_bdevs": 3, 00:08:14.313 "num_base_bdevs_discovered": 2, 00:08:14.313 "num_base_bdevs_operational": 2, 00:08:14.313 "base_bdevs_list": [ 00:08:14.313 { 00:08:14.313 "name": null, 00:08:14.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.313 "is_configured": false, 00:08:14.313 "data_offset": 0, 00:08:14.313 "data_size": 65536 00:08:14.313 }, 00:08:14.313 { 00:08:14.313 "name": "BaseBdev2", 00:08:14.313 "uuid": "50e21d62-4242-41d0-9074-a0e4955a6526", 00:08:14.313 "is_configured": true, 00:08:14.313 "data_offset": 0, 00:08:14.313 "data_size": 65536 00:08:14.313 }, 00:08:14.313 { 00:08:14.313 "name": "BaseBdev3", 00:08:14.313 "uuid": "9781524a-19fc-43ba-83f6-b0709b137818", 00:08:14.313 "is_configured": true, 00:08:14.313 "data_offset": 0, 00:08:14.313 "data_size": 65536 00:08:14.313 } 00:08:14.313 ] 00:08:14.313 }' 00:08:14.313 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.313 09:45:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.883 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:14.883 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:14.883 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.883 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:14.883 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.883 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.883 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.883 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:14.883 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:14.883 09:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:14.883 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.883 09:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.883 [2024-12-06 09:45:39.935727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:14.883 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.883 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:14.883 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:14.883 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.883 09:45:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.883 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.883 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:14.883 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.883 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:14.883 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:14.883 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:14.883 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.883 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.883 [2024-12-06 09:45:40.079184] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:14.883 [2024-12-06 09:45:40.079238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.144 BaseBdev2 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.144 [ 00:08:15.144 { 00:08:15.144 "name": "BaseBdev2", 00:08:15.144 "aliases": [ 00:08:15.144 "28170a07-1381-46ac-9e17-c4369fc2ce76" 00:08:15.144 ], 00:08:15.144 "product_name": "Malloc disk", 00:08:15.144 "block_size": 512, 00:08:15.144 "num_blocks": 65536, 00:08:15.144 "uuid": "28170a07-1381-46ac-9e17-c4369fc2ce76", 00:08:15.144 "assigned_rate_limits": { 00:08:15.144 "rw_ios_per_sec": 0, 00:08:15.144 "rw_mbytes_per_sec": 0, 00:08:15.144 "r_mbytes_per_sec": 0, 00:08:15.144 "w_mbytes_per_sec": 0 00:08:15.144 }, 00:08:15.144 "claimed": false, 00:08:15.144 "zoned": false, 00:08:15.144 "supported_io_types": { 00:08:15.144 "read": true, 00:08:15.144 "write": true, 00:08:15.144 "unmap": true, 00:08:15.144 "flush": true, 00:08:15.144 "reset": true, 00:08:15.144 "nvme_admin": false, 00:08:15.144 "nvme_io": false, 00:08:15.144 "nvme_io_md": false, 00:08:15.144 "write_zeroes": true, 00:08:15.144 "zcopy": true, 00:08:15.144 "get_zone_info": false, 00:08:15.144 "zone_management": false, 00:08:15.144 "zone_append": false, 00:08:15.144 "compare": false, 00:08:15.144 "compare_and_write": false, 00:08:15.144 "abort": true, 00:08:15.144 "seek_hole": false, 00:08:15.144 "seek_data": false, 00:08:15.144 "copy": true, 00:08:15.144 "nvme_iov_md": false 00:08:15.144 }, 00:08:15.144 "memory_domains": [ 00:08:15.144 { 00:08:15.144 "dma_device_id": "system", 00:08:15.144 "dma_device_type": 1 00:08:15.144 }, 
00:08:15.144 { 00:08:15.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.144 "dma_device_type": 2 00:08:15.144 } 00:08:15.144 ], 00:08:15.144 "driver_specific": {} 00:08:15.144 } 00:08:15.144 ] 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.144 BaseBdev3 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.144 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.145 [ 00:08:15.145 { 00:08:15.145 "name": "BaseBdev3", 00:08:15.145 "aliases": [ 00:08:15.145 "3cecb1c6-358c-462c-96cb-d57e7a6bbc53" 00:08:15.145 ], 00:08:15.145 "product_name": "Malloc disk", 00:08:15.145 "block_size": 512, 00:08:15.145 "num_blocks": 65536, 00:08:15.145 "uuid": "3cecb1c6-358c-462c-96cb-d57e7a6bbc53", 00:08:15.145 "assigned_rate_limits": { 00:08:15.145 "rw_ios_per_sec": 0, 00:08:15.145 "rw_mbytes_per_sec": 0, 00:08:15.145 "r_mbytes_per_sec": 0, 00:08:15.145 "w_mbytes_per_sec": 0 00:08:15.145 }, 00:08:15.145 "claimed": false, 00:08:15.145 "zoned": false, 00:08:15.145 "supported_io_types": { 00:08:15.145 "read": true, 00:08:15.145 "write": true, 00:08:15.145 "unmap": true, 00:08:15.145 "flush": true, 00:08:15.145 "reset": true, 00:08:15.145 "nvme_admin": false, 00:08:15.145 "nvme_io": false, 00:08:15.145 "nvme_io_md": false, 00:08:15.145 "write_zeroes": true, 00:08:15.145 "zcopy": true, 00:08:15.145 "get_zone_info": false, 00:08:15.145 "zone_management": false, 00:08:15.145 "zone_append": false, 00:08:15.145 "compare": false, 00:08:15.145 "compare_and_write": false, 00:08:15.145 "abort": true, 00:08:15.145 "seek_hole": false, 00:08:15.145 "seek_data": false, 00:08:15.145 "copy": true, 00:08:15.145 "nvme_iov_md": false 00:08:15.145 }, 00:08:15.145 "memory_domains": [ 00:08:15.145 { 00:08:15.145 "dma_device_id": "system", 00:08:15.145 "dma_device_type": 1 00:08:15.145 }, 00:08:15.145 { 
00:08:15.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.145 "dma_device_type": 2 00:08:15.145 } 00:08:15.145 ], 00:08:15.145 "driver_specific": {} 00:08:15.145 } 00:08:15.145 ] 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.145 [2024-12-06 09:45:40.390512] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:15.145 [2024-12-06 09:45:40.390557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:15.145 [2024-12-06 09:45:40.390577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.145 [2024-12-06 09:45:40.392356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.145 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.404 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.404 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.404 "name": "Existed_Raid", 00:08:15.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.404 "strip_size_kb": 64, 00:08:15.404 "state": "configuring", 00:08:15.404 "raid_level": "raid0", 00:08:15.404 "superblock": false, 00:08:15.404 "num_base_bdevs": 3, 00:08:15.404 "num_base_bdevs_discovered": 2, 00:08:15.404 "num_base_bdevs_operational": 3, 00:08:15.404 "base_bdevs_list": [ 00:08:15.404 { 00:08:15.405 "name": "BaseBdev1", 00:08:15.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.405 
"is_configured": false, 00:08:15.405 "data_offset": 0, 00:08:15.405 "data_size": 0 00:08:15.405 }, 00:08:15.405 { 00:08:15.405 "name": "BaseBdev2", 00:08:15.405 "uuid": "28170a07-1381-46ac-9e17-c4369fc2ce76", 00:08:15.405 "is_configured": true, 00:08:15.405 "data_offset": 0, 00:08:15.405 "data_size": 65536 00:08:15.405 }, 00:08:15.405 { 00:08:15.405 "name": "BaseBdev3", 00:08:15.405 "uuid": "3cecb1c6-358c-462c-96cb-d57e7a6bbc53", 00:08:15.405 "is_configured": true, 00:08:15.405 "data_offset": 0, 00:08:15.405 "data_size": 65536 00:08:15.405 } 00:08:15.405 ] 00:08:15.405 }' 00:08:15.405 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.405 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.664 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:15.664 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.664 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.664 [2024-12-06 09:45:40.849771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:15.664 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.664 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.664 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.664 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.664 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.664 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.664 09:45:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.665 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.665 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.665 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.665 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.665 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.665 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.665 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.665 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.665 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.665 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.665 "name": "Existed_Raid", 00:08:15.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.665 "strip_size_kb": 64, 00:08:15.665 "state": "configuring", 00:08:15.665 "raid_level": "raid0", 00:08:15.665 "superblock": false, 00:08:15.665 "num_base_bdevs": 3, 00:08:15.665 "num_base_bdevs_discovered": 1, 00:08:15.665 "num_base_bdevs_operational": 3, 00:08:15.665 "base_bdevs_list": [ 00:08:15.665 { 00:08:15.665 "name": "BaseBdev1", 00:08:15.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.665 "is_configured": false, 00:08:15.665 "data_offset": 0, 00:08:15.665 "data_size": 0 00:08:15.665 }, 00:08:15.665 { 00:08:15.665 "name": null, 00:08:15.665 "uuid": "28170a07-1381-46ac-9e17-c4369fc2ce76", 00:08:15.665 "is_configured": false, 00:08:15.665 "data_offset": 0, 
00:08:15.665 "data_size": 65536 00:08:15.665 }, 00:08:15.665 { 00:08:15.665 "name": "BaseBdev3", 00:08:15.665 "uuid": "3cecb1c6-358c-462c-96cb-d57e7a6bbc53", 00:08:15.665 "is_configured": true, 00:08:15.665 "data_offset": 0, 00:08:15.665 "data_size": 65536 00:08:15.665 } 00:08:15.665 ] 00:08:15.665 }' 00:08:15.665 09:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.665 09:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.235 [2024-12-06 09:45:41.377591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.235 BaseBdev1 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.235 [ 00:08:16.235 { 00:08:16.235 "name": "BaseBdev1", 00:08:16.235 "aliases": [ 00:08:16.235 "3724bff1-af8c-4703-98c5-dbd7568cf156" 00:08:16.235 ], 00:08:16.235 "product_name": "Malloc disk", 00:08:16.235 "block_size": 512, 00:08:16.235 "num_blocks": 65536, 00:08:16.235 "uuid": "3724bff1-af8c-4703-98c5-dbd7568cf156", 00:08:16.235 "assigned_rate_limits": { 00:08:16.235 "rw_ios_per_sec": 0, 00:08:16.235 "rw_mbytes_per_sec": 0, 00:08:16.235 "r_mbytes_per_sec": 0, 00:08:16.235 "w_mbytes_per_sec": 0 00:08:16.235 }, 00:08:16.235 "claimed": true, 00:08:16.235 "claim_type": "exclusive_write", 00:08:16.235 "zoned": false, 00:08:16.235 "supported_io_types": { 00:08:16.235 "read": true, 00:08:16.235 "write": true, 00:08:16.235 "unmap": 
true, 00:08:16.235 "flush": true, 00:08:16.235 "reset": true, 00:08:16.235 "nvme_admin": false, 00:08:16.235 "nvme_io": false, 00:08:16.235 "nvme_io_md": false, 00:08:16.235 "write_zeroes": true, 00:08:16.235 "zcopy": true, 00:08:16.235 "get_zone_info": false, 00:08:16.235 "zone_management": false, 00:08:16.235 "zone_append": false, 00:08:16.235 "compare": false, 00:08:16.235 "compare_and_write": false, 00:08:16.235 "abort": true, 00:08:16.235 "seek_hole": false, 00:08:16.235 "seek_data": false, 00:08:16.235 "copy": true, 00:08:16.235 "nvme_iov_md": false 00:08:16.235 }, 00:08:16.235 "memory_domains": [ 00:08:16.235 { 00:08:16.235 "dma_device_id": "system", 00:08:16.235 "dma_device_type": 1 00:08:16.235 }, 00:08:16.235 { 00:08:16.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.235 "dma_device_type": 2 00:08:16.235 } 00:08:16.235 ], 00:08:16.235 "driver_specific": {} 00:08:16.235 } 00:08:16.235 ] 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.235 09:45:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.235 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.235 "name": "Existed_Raid", 00:08:16.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.235 "strip_size_kb": 64, 00:08:16.235 "state": "configuring", 00:08:16.235 "raid_level": "raid0", 00:08:16.235 "superblock": false, 00:08:16.235 "num_base_bdevs": 3, 00:08:16.235 "num_base_bdevs_discovered": 2, 00:08:16.235 "num_base_bdevs_operational": 3, 00:08:16.235 "base_bdevs_list": [ 00:08:16.235 { 00:08:16.235 "name": "BaseBdev1", 00:08:16.235 "uuid": "3724bff1-af8c-4703-98c5-dbd7568cf156", 00:08:16.235 "is_configured": true, 00:08:16.235 "data_offset": 0, 00:08:16.235 "data_size": 65536 00:08:16.235 }, 00:08:16.236 { 00:08:16.236 "name": null, 00:08:16.236 "uuid": "28170a07-1381-46ac-9e17-c4369fc2ce76", 00:08:16.236 "is_configured": false, 00:08:16.236 "data_offset": 0, 00:08:16.236 "data_size": 65536 00:08:16.236 }, 00:08:16.236 { 00:08:16.236 "name": "BaseBdev3", 00:08:16.236 "uuid": "3cecb1c6-358c-462c-96cb-d57e7a6bbc53", 00:08:16.236 "is_configured": true, 00:08:16.236 "data_offset": 0, 
00:08:16.236 "data_size": 65536 00:08:16.236 } 00:08:16.236 ] 00:08:16.236 }' 00:08:16.236 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.236 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.805 [2024-12-06 09:45:41.928709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.805 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.805 "name": "Existed_Raid", 00:08:16.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.805 "strip_size_kb": 64, 00:08:16.805 "state": "configuring", 00:08:16.805 "raid_level": "raid0", 00:08:16.805 "superblock": false, 00:08:16.805 "num_base_bdevs": 3, 00:08:16.805 "num_base_bdevs_discovered": 1, 00:08:16.805 "num_base_bdevs_operational": 3, 00:08:16.805 "base_bdevs_list": [ 00:08:16.805 { 00:08:16.805 "name": "BaseBdev1", 00:08:16.805 "uuid": "3724bff1-af8c-4703-98c5-dbd7568cf156", 00:08:16.805 "is_configured": true, 00:08:16.805 "data_offset": 0, 00:08:16.805 "data_size": 65536 00:08:16.806 }, 00:08:16.806 { 
00:08:16.806 "name": null, 00:08:16.806 "uuid": "28170a07-1381-46ac-9e17-c4369fc2ce76", 00:08:16.806 "is_configured": false, 00:08:16.806 "data_offset": 0, 00:08:16.806 "data_size": 65536 00:08:16.806 }, 00:08:16.806 { 00:08:16.806 "name": null, 00:08:16.806 "uuid": "3cecb1c6-358c-462c-96cb-d57e7a6bbc53", 00:08:16.806 "is_configured": false, 00:08:16.806 "data_offset": 0, 00:08:16.806 "data_size": 65536 00:08:16.806 } 00:08:16.806 ] 00:08:16.806 }' 00:08:16.806 09:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.806 09:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.382 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.382 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:17.382 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.382 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.382 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.382 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:17.382 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:17.382 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.382 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.382 [2024-12-06 09:45:42.459843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:17.382 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.382 09:45:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.383 "name": "Existed_Raid", 00:08:17.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.383 "strip_size_kb": 64, 00:08:17.383 "state": "configuring", 00:08:17.383 "raid_level": "raid0", 00:08:17.383 
"superblock": false, 00:08:17.383 "num_base_bdevs": 3, 00:08:17.383 "num_base_bdevs_discovered": 2, 00:08:17.383 "num_base_bdevs_operational": 3, 00:08:17.383 "base_bdevs_list": [ 00:08:17.383 { 00:08:17.383 "name": "BaseBdev1", 00:08:17.383 "uuid": "3724bff1-af8c-4703-98c5-dbd7568cf156", 00:08:17.383 "is_configured": true, 00:08:17.383 "data_offset": 0, 00:08:17.383 "data_size": 65536 00:08:17.383 }, 00:08:17.383 { 00:08:17.383 "name": null, 00:08:17.383 "uuid": "28170a07-1381-46ac-9e17-c4369fc2ce76", 00:08:17.383 "is_configured": false, 00:08:17.383 "data_offset": 0, 00:08:17.383 "data_size": 65536 00:08:17.383 }, 00:08:17.383 { 00:08:17.383 "name": "BaseBdev3", 00:08:17.383 "uuid": "3cecb1c6-358c-462c-96cb-d57e7a6bbc53", 00:08:17.383 "is_configured": true, 00:08:17.383 "data_offset": 0, 00:08:17.383 "data_size": 65536 00:08:17.383 } 00:08:17.383 ] 00:08:17.383 }' 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.383 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.954 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.954 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:17.954 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.954 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.954 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.954 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:17.954 09:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:17.954 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:17.954 09:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.954 [2024-12-06 09:45:42.966993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.954 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.954 "name": "Existed_Raid", 00:08:17.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.954 "strip_size_kb": 64, 00:08:17.954 "state": "configuring", 00:08:17.954 "raid_level": "raid0", 00:08:17.954 "superblock": false, 00:08:17.954 "num_base_bdevs": 3, 00:08:17.954 "num_base_bdevs_discovered": 1, 00:08:17.954 "num_base_bdevs_operational": 3, 00:08:17.954 "base_bdevs_list": [ 00:08:17.954 { 00:08:17.954 "name": null, 00:08:17.954 "uuid": "3724bff1-af8c-4703-98c5-dbd7568cf156", 00:08:17.955 "is_configured": false, 00:08:17.955 "data_offset": 0, 00:08:17.955 "data_size": 65536 00:08:17.955 }, 00:08:17.955 { 00:08:17.955 "name": null, 00:08:17.955 "uuid": "28170a07-1381-46ac-9e17-c4369fc2ce76", 00:08:17.955 "is_configured": false, 00:08:17.955 "data_offset": 0, 00:08:17.955 "data_size": 65536 00:08:17.955 }, 00:08:17.955 { 00:08:17.955 "name": "BaseBdev3", 00:08:17.955 "uuid": "3cecb1c6-358c-462c-96cb-d57e7a6bbc53", 00:08:17.955 "is_configured": true, 00:08:17.955 "data_offset": 0, 00:08:17.955 "data_size": 65536 00:08:17.955 } 00:08:17.955 ] 00:08:17.955 }' 00:08:17.955 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.955 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.524 [2024-12-06 09:45:43.561289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.524 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:18.525 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.525 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.525 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.525 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.525 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.525 "name": "Existed_Raid", 00:08:18.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.525 "strip_size_kb": 64, 00:08:18.525 "state": "configuring", 00:08:18.525 "raid_level": "raid0", 00:08:18.525 "superblock": false, 00:08:18.525 "num_base_bdevs": 3, 00:08:18.525 "num_base_bdevs_discovered": 2, 00:08:18.525 "num_base_bdevs_operational": 3, 00:08:18.525 "base_bdevs_list": [ 00:08:18.525 { 00:08:18.525 "name": null, 00:08:18.525 "uuid": "3724bff1-af8c-4703-98c5-dbd7568cf156", 00:08:18.525 "is_configured": false, 00:08:18.525 "data_offset": 0, 00:08:18.525 "data_size": 65536 00:08:18.525 }, 00:08:18.525 { 00:08:18.525 "name": "BaseBdev2", 00:08:18.525 "uuid": "28170a07-1381-46ac-9e17-c4369fc2ce76", 00:08:18.525 "is_configured": true, 00:08:18.525 "data_offset": 0, 00:08:18.525 "data_size": 65536 00:08:18.525 }, 00:08:18.525 { 00:08:18.525 "name": "BaseBdev3", 00:08:18.525 "uuid": "3cecb1c6-358c-462c-96cb-d57e7a6bbc53", 00:08:18.525 "is_configured": true, 00:08:18.525 "data_offset": 0, 00:08:18.525 "data_size": 65536 00:08:18.525 } 00:08:18.525 ] 00:08:18.525 }' 00:08:18.525 09:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.525 09:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.785 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:18.785 
09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.785 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.785 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.785 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.785 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:18.785 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.785 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.785 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.785 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:19.045 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.045 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3724bff1-af8c-4703-98c5-dbd7568cf156 00:08:19.045 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.045 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.045 [2024-12-06 09:45:44.128801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:19.045 [2024-12-06 09:45:44.128850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:19.045 [2024-12-06 09:45:44.128876] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:19.045 [2024-12-06 09:45:44.129141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:19.045 [2024-12-06 09:45:44.129324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:19.045 [2024-12-06 09:45:44.129339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:19.045 NewBaseBdev 00:08:19.045 [2024-12-06 09:45:44.129583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.045 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.045 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:19.045 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:19.045 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.045 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:19.046 [ 00:08:19.046 { 00:08:19.046 "name": "NewBaseBdev", 00:08:19.046 "aliases": [ 00:08:19.046 "3724bff1-af8c-4703-98c5-dbd7568cf156" 00:08:19.046 ], 00:08:19.046 "product_name": "Malloc disk", 00:08:19.046 "block_size": 512, 00:08:19.046 "num_blocks": 65536, 00:08:19.046 "uuid": "3724bff1-af8c-4703-98c5-dbd7568cf156", 00:08:19.046 "assigned_rate_limits": { 00:08:19.046 "rw_ios_per_sec": 0, 00:08:19.046 "rw_mbytes_per_sec": 0, 00:08:19.046 "r_mbytes_per_sec": 0, 00:08:19.046 "w_mbytes_per_sec": 0 00:08:19.046 }, 00:08:19.046 "claimed": true, 00:08:19.046 "claim_type": "exclusive_write", 00:08:19.046 "zoned": false, 00:08:19.046 "supported_io_types": { 00:08:19.046 "read": true, 00:08:19.046 "write": true, 00:08:19.046 "unmap": true, 00:08:19.046 "flush": true, 00:08:19.046 "reset": true, 00:08:19.046 "nvme_admin": false, 00:08:19.046 "nvme_io": false, 00:08:19.046 "nvme_io_md": false, 00:08:19.046 "write_zeroes": true, 00:08:19.046 "zcopy": true, 00:08:19.046 "get_zone_info": false, 00:08:19.046 "zone_management": false, 00:08:19.046 "zone_append": false, 00:08:19.046 "compare": false, 00:08:19.046 "compare_and_write": false, 00:08:19.046 "abort": true, 00:08:19.046 "seek_hole": false, 00:08:19.046 "seek_data": false, 00:08:19.046 "copy": true, 00:08:19.046 "nvme_iov_md": false 00:08:19.046 }, 00:08:19.046 "memory_domains": [ 00:08:19.046 { 00:08:19.046 "dma_device_id": "system", 00:08:19.046 "dma_device_type": 1 00:08:19.046 }, 00:08:19.046 { 00:08:19.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.046 "dma_device_type": 2 00:08:19.046 } 00:08:19.046 ], 00:08:19.046 "driver_specific": {} 00:08:19.046 } 00:08:19.046 ] 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.046 "name": "Existed_Raid", 00:08:19.046 "uuid": "dee431c2-4ba9-4743-a6aa-14fc9f790af3", 00:08:19.046 "strip_size_kb": 64, 00:08:19.046 "state": "online", 00:08:19.046 "raid_level": "raid0", 00:08:19.046 "superblock": false, 00:08:19.046 "num_base_bdevs": 3, 00:08:19.046 
"num_base_bdevs_discovered": 3, 00:08:19.046 "num_base_bdevs_operational": 3, 00:08:19.046 "base_bdevs_list": [ 00:08:19.046 { 00:08:19.046 "name": "NewBaseBdev", 00:08:19.046 "uuid": "3724bff1-af8c-4703-98c5-dbd7568cf156", 00:08:19.046 "is_configured": true, 00:08:19.046 "data_offset": 0, 00:08:19.046 "data_size": 65536 00:08:19.046 }, 00:08:19.046 { 00:08:19.046 "name": "BaseBdev2", 00:08:19.046 "uuid": "28170a07-1381-46ac-9e17-c4369fc2ce76", 00:08:19.046 "is_configured": true, 00:08:19.046 "data_offset": 0, 00:08:19.046 "data_size": 65536 00:08:19.046 }, 00:08:19.046 { 00:08:19.046 "name": "BaseBdev3", 00:08:19.046 "uuid": "3cecb1c6-358c-462c-96cb-d57e7a6bbc53", 00:08:19.046 "is_configured": true, 00:08:19.046 "data_offset": 0, 00:08:19.046 "data_size": 65536 00:08:19.046 } 00:08:19.046 ] 00:08:19.046 }' 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.046 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.618 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:19.618 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:19.618 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:19.618 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:19.618 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:19.618 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:19.618 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:19.618 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:19.618 09:45:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.618 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.618 [2024-12-06 09:45:44.636323] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.618 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.618 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:19.618 "name": "Existed_Raid", 00:08:19.618 "aliases": [ 00:08:19.618 "dee431c2-4ba9-4743-a6aa-14fc9f790af3" 00:08:19.618 ], 00:08:19.618 "product_name": "Raid Volume", 00:08:19.618 "block_size": 512, 00:08:19.618 "num_blocks": 196608, 00:08:19.618 "uuid": "dee431c2-4ba9-4743-a6aa-14fc9f790af3", 00:08:19.618 "assigned_rate_limits": { 00:08:19.618 "rw_ios_per_sec": 0, 00:08:19.618 "rw_mbytes_per_sec": 0, 00:08:19.618 "r_mbytes_per_sec": 0, 00:08:19.618 "w_mbytes_per_sec": 0 00:08:19.618 }, 00:08:19.618 "claimed": false, 00:08:19.618 "zoned": false, 00:08:19.618 "supported_io_types": { 00:08:19.618 "read": true, 00:08:19.618 "write": true, 00:08:19.618 "unmap": true, 00:08:19.618 "flush": true, 00:08:19.618 "reset": true, 00:08:19.618 "nvme_admin": false, 00:08:19.618 "nvme_io": false, 00:08:19.618 "nvme_io_md": false, 00:08:19.618 "write_zeroes": true, 00:08:19.618 "zcopy": false, 00:08:19.618 "get_zone_info": false, 00:08:19.618 "zone_management": false, 00:08:19.618 "zone_append": false, 00:08:19.618 "compare": false, 00:08:19.618 "compare_and_write": false, 00:08:19.618 "abort": false, 00:08:19.618 "seek_hole": false, 00:08:19.618 "seek_data": false, 00:08:19.618 "copy": false, 00:08:19.618 "nvme_iov_md": false 00:08:19.618 }, 00:08:19.618 "memory_domains": [ 00:08:19.618 { 00:08:19.618 "dma_device_id": "system", 00:08:19.618 "dma_device_type": 1 00:08:19.618 }, 00:08:19.618 { 00:08:19.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.618 "dma_device_type": 2 00:08:19.618 }, 
00:08:19.618 { 00:08:19.618 "dma_device_id": "system", 00:08:19.618 "dma_device_type": 1 00:08:19.618 }, 00:08:19.618 { 00:08:19.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.618 "dma_device_type": 2 00:08:19.618 }, 00:08:19.618 { 00:08:19.618 "dma_device_id": "system", 00:08:19.618 "dma_device_type": 1 00:08:19.618 }, 00:08:19.618 { 00:08:19.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.618 "dma_device_type": 2 00:08:19.618 } 00:08:19.618 ], 00:08:19.618 "driver_specific": { 00:08:19.618 "raid": { 00:08:19.618 "uuid": "dee431c2-4ba9-4743-a6aa-14fc9f790af3", 00:08:19.618 "strip_size_kb": 64, 00:08:19.618 "state": "online", 00:08:19.618 "raid_level": "raid0", 00:08:19.618 "superblock": false, 00:08:19.618 "num_base_bdevs": 3, 00:08:19.618 "num_base_bdevs_discovered": 3, 00:08:19.618 "num_base_bdevs_operational": 3, 00:08:19.618 "base_bdevs_list": [ 00:08:19.618 { 00:08:19.618 "name": "NewBaseBdev", 00:08:19.618 "uuid": "3724bff1-af8c-4703-98c5-dbd7568cf156", 00:08:19.618 "is_configured": true, 00:08:19.618 "data_offset": 0, 00:08:19.618 "data_size": 65536 00:08:19.618 }, 00:08:19.618 { 00:08:19.618 "name": "BaseBdev2", 00:08:19.618 "uuid": "28170a07-1381-46ac-9e17-c4369fc2ce76", 00:08:19.618 "is_configured": true, 00:08:19.619 "data_offset": 0, 00:08:19.619 "data_size": 65536 00:08:19.619 }, 00:08:19.619 { 00:08:19.619 "name": "BaseBdev3", 00:08:19.619 "uuid": "3cecb1c6-358c-462c-96cb-d57e7a6bbc53", 00:08:19.619 "is_configured": true, 00:08:19.619 "data_offset": 0, 00:08:19.619 "data_size": 65536 00:08:19.619 } 00:08:19.619 ] 00:08:19.619 } 00:08:19.619 } 00:08:19.619 }' 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:19.619 BaseBdev2 00:08:19.619 BaseBdev3' 00:08:19.619 09:45:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.619 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.619 [2024-12-06 09:45:44.887556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.619 [2024-12-06 09:45:44.887588] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.619 [2024-12-06 09:45:44.887671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.619 [2024-12-06 09:45:44.887728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.619 [2024-12-06 09:45:44.887740] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:19.880 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.880 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63779 00:08:19.880 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63779 ']' 00:08:19.880 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63779 00:08:19.880 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:19.880 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.880 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63779 00:08:19.880 killing process with pid 63779 00:08:19.880 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.880 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.880 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63779' 00:08:19.880 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63779 00:08:19.880 [2024-12-06 09:45:44.920755] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.880 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63779 00:08:20.140 [2024-12-06 09:45:45.226856] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.521 ************************************ 00:08:21.521 END TEST raid_state_function_test 00:08:21.521 ************************************ 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:21.521 00:08:21.521 real 0m10.875s 
00:08:21.521 user 0m17.451s 00:08:21.521 sys 0m1.809s 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.521 09:45:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:21.521 09:45:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.521 09:45:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.521 09:45:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.521 ************************************ 00:08:21.521 START TEST raid_state_function_test_sb 00:08:21.521 ************************************ 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64400 00:08:21.521 09:45:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:21.521 Process raid pid: 64400 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64400' 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64400 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64400 ']' 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.521 09:45:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.521 [2024-12-06 09:45:46.518288] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:21.521 [2024-12-06 09:45:46.518864] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.521 [2024-12-06 09:45:46.694435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.781 [2024-12-06 09:45:46.809246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.781 [2024-12-06 09:45:47.011897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.781 [2024-12-06 09:45:47.011940] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.352 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.352 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:22.352 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.352 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.352 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.352 [2024-12-06 09:45:47.348254] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.352 [2024-12-06 09:45:47.348303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.353 [2024-12-06 09:45:47.348318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.353 [2024-12-06 09:45:47.348328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.353 [2024-12-06 09:45:47.348334] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:22.353 [2024-12-06 09:45:47.348343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.353 "name": "Existed_Raid", 00:08:22.353 "uuid": "d35e8a77-86a5-463c-acd0-e7fc8fcd8b3a", 00:08:22.353 "strip_size_kb": 64, 00:08:22.353 "state": "configuring", 00:08:22.353 "raid_level": "raid0", 00:08:22.353 "superblock": true, 00:08:22.353 "num_base_bdevs": 3, 00:08:22.353 "num_base_bdevs_discovered": 0, 00:08:22.353 "num_base_bdevs_operational": 3, 00:08:22.353 "base_bdevs_list": [ 00:08:22.353 { 00:08:22.353 "name": "BaseBdev1", 00:08:22.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.353 "is_configured": false, 00:08:22.353 "data_offset": 0, 00:08:22.353 "data_size": 0 00:08:22.353 }, 00:08:22.353 { 00:08:22.353 "name": "BaseBdev2", 00:08:22.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.353 "is_configured": false, 00:08:22.353 "data_offset": 0, 00:08:22.353 "data_size": 0 00:08:22.353 }, 00:08:22.353 { 00:08:22.353 "name": "BaseBdev3", 00:08:22.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.353 "is_configured": false, 00:08:22.353 "data_offset": 0, 00:08:22.353 "data_size": 0 00:08:22.353 } 00:08:22.353 ] 00:08:22.353 }' 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.353 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.613 [2024-12-06 09:45:47.767478] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.613 [2024-12-06 09:45:47.767518] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.613 [2024-12-06 09:45:47.775489] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.613 [2024-12-06 09:45:47.775530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.613 [2024-12-06 09:45:47.775540] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.613 [2024-12-06 09:45:47.775549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.613 [2024-12-06 09:45:47.775555] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:22.613 [2024-12-06 09:45:47.775564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.613 [2024-12-06 09:45:47.818354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.613 BaseBdev1 
00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.613 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.613 [ 00:08:22.613 { 00:08:22.613 "name": "BaseBdev1", 00:08:22.613 "aliases": [ 00:08:22.613 "8ccde90b-9631-4fb7-859a-c9cb7d6b69b0" 00:08:22.613 ], 00:08:22.613 "product_name": "Malloc disk", 00:08:22.613 "block_size": 512, 00:08:22.613 "num_blocks": 65536, 00:08:22.613 "uuid": "8ccde90b-9631-4fb7-859a-c9cb7d6b69b0", 00:08:22.613 "assigned_rate_limits": { 00:08:22.613 
"rw_ios_per_sec": 0, 00:08:22.613 "rw_mbytes_per_sec": 0, 00:08:22.613 "r_mbytes_per_sec": 0, 00:08:22.613 "w_mbytes_per_sec": 0 00:08:22.613 }, 00:08:22.613 "claimed": true, 00:08:22.613 "claim_type": "exclusive_write", 00:08:22.613 "zoned": false, 00:08:22.613 "supported_io_types": { 00:08:22.613 "read": true, 00:08:22.613 "write": true, 00:08:22.613 "unmap": true, 00:08:22.613 "flush": true, 00:08:22.613 "reset": true, 00:08:22.613 "nvme_admin": false, 00:08:22.613 "nvme_io": false, 00:08:22.613 "nvme_io_md": false, 00:08:22.613 "write_zeroes": true, 00:08:22.613 "zcopy": true, 00:08:22.613 "get_zone_info": false, 00:08:22.613 "zone_management": false, 00:08:22.613 "zone_append": false, 00:08:22.613 "compare": false, 00:08:22.613 "compare_and_write": false, 00:08:22.614 "abort": true, 00:08:22.614 "seek_hole": false, 00:08:22.614 "seek_data": false, 00:08:22.614 "copy": true, 00:08:22.614 "nvme_iov_md": false 00:08:22.614 }, 00:08:22.614 "memory_domains": [ 00:08:22.614 { 00:08:22.614 "dma_device_id": "system", 00:08:22.614 "dma_device_type": 1 00:08:22.614 }, 00:08:22.614 { 00:08:22.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.614 "dma_device_type": 2 00:08:22.614 } 00:08:22.614 ], 00:08:22.614 "driver_specific": {} 00:08:22.614 } 00:08:22.614 ] 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.614 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.874 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.874 "name": "Existed_Raid", 00:08:22.874 "uuid": "810f72bf-7f05-4c45-8e86-ce235b8991ee", 00:08:22.874 "strip_size_kb": 64, 00:08:22.874 "state": "configuring", 00:08:22.874 "raid_level": "raid0", 00:08:22.874 "superblock": true, 00:08:22.874 "num_base_bdevs": 3, 00:08:22.874 "num_base_bdevs_discovered": 1, 00:08:22.874 "num_base_bdevs_operational": 3, 00:08:22.874 "base_bdevs_list": [ 00:08:22.874 { 00:08:22.874 "name": "BaseBdev1", 00:08:22.874 "uuid": "8ccde90b-9631-4fb7-859a-c9cb7d6b69b0", 00:08:22.874 "is_configured": true, 00:08:22.874 "data_offset": 2048, 00:08:22.874 "data_size": 63488 
00:08:22.874 }, 00:08:22.874 { 00:08:22.874 "name": "BaseBdev2", 00:08:22.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.874 "is_configured": false, 00:08:22.874 "data_offset": 0, 00:08:22.874 "data_size": 0 00:08:22.874 }, 00:08:22.874 { 00:08:22.874 "name": "BaseBdev3", 00:08:22.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.874 "is_configured": false, 00:08:22.874 "data_offset": 0, 00:08:22.874 "data_size": 0 00:08:22.874 } 00:08:22.874 ] 00:08:22.874 }' 00:08:22.874 09:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.874 09:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.134 [2024-12-06 09:45:48.317571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.134 [2024-12-06 09:45:48.317631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.134 [2024-12-06 09:45:48.325616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.134 [2024-12-06 
09:45:48.327606] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.134 [2024-12-06 09:45:48.327644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.134 [2024-12-06 09:45:48.327654] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:23.134 [2024-12-06 09:45:48.327663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.134 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.134 "name": "Existed_Raid", 00:08:23.134 "uuid": "ca9182de-5339-4ec3-b816-3737c0562772", 00:08:23.134 "strip_size_kb": 64, 00:08:23.134 "state": "configuring", 00:08:23.134 "raid_level": "raid0", 00:08:23.134 "superblock": true, 00:08:23.134 "num_base_bdevs": 3, 00:08:23.134 "num_base_bdevs_discovered": 1, 00:08:23.134 "num_base_bdevs_operational": 3, 00:08:23.134 "base_bdevs_list": [ 00:08:23.134 { 00:08:23.134 "name": "BaseBdev1", 00:08:23.134 "uuid": "8ccde90b-9631-4fb7-859a-c9cb7d6b69b0", 00:08:23.134 "is_configured": true, 00:08:23.134 "data_offset": 2048, 00:08:23.134 "data_size": 63488 00:08:23.134 }, 00:08:23.134 { 00:08:23.134 "name": "BaseBdev2", 00:08:23.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.134 "is_configured": false, 00:08:23.134 "data_offset": 0, 00:08:23.134 "data_size": 0 00:08:23.135 }, 00:08:23.135 { 00:08:23.135 "name": "BaseBdev3", 00:08:23.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.135 "is_configured": false, 00:08:23.135 "data_offset": 0, 00:08:23.135 "data_size": 0 00:08:23.135 } 00:08:23.135 ] 00:08:23.135 }' 00:08:23.135 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.135 09:45:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.801 [2024-12-06 09:45:48.762811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.801 BaseBdev2 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.801 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.801 [ 00:08:23.802 { 00:08:23.802 "name": "BaseBdev2", 00:08:23.802 "aliases": [ 00:08:23.802 "45b6d9ba-fcd7-414c-8aa9-ce9a3b2c2067" 00:08:23.802 ], 00:08:23.802 "product_name": "Malloc disk", 00:08:23.802 "block_size": 512, 00:08:23.802 "num_blocks": 65536, 00:08:23.802 "uuid": "45b6d9ba-fcd7-414c-8aa9-ce9a3b2c2067", 00:08:23.802 "assigned_rate_limits": { 00:08:23.802 "rw_ios_per_sec": 0, 00:08:23.802 "rw_mbytes_per_sec": 0, 00:08:23.802 "r_mbytes_per_sec": 0, 00:08:23.802 "w_mbytes_per_sec": 0 00:08:23.802 }, 00:08:23.802 "claimed": true, 00:08:23.802 "claim_type": "exclusive_write", 00:08:23.802 "zoned": false, 00:08:23.802 "supported_io_types": { 00:08:23.802 "read": true, 00:08:23.802 "write": true, 00:08:23.802 "unmap": true, 00:08:23.802 "flush": true, 00:08:23.802 "reset": true, 00:08:23.802 "nvme_admin": false, 00:08:23.802 "nvme_io": false, 00:08:23.802 "nvme_io_md": false, 00:08:23.802 "write_zeroes": true, 00:08:23.802 "zcopy": true, 00:08:23.802 "get_zone_info": false, 00:08:23.802 "zone_management": false, 00:08:23.802 "zone_append": false, 00:08:23.802 "compare": false, 00:08:23.802 "compare_and_write": false, 00:08:23.802 "abort": true, 00:08:23.802 "seek_hole": false, 00:08:23.802 "seek_data": false, 00:08:23.802 "copy": true, 00:08:23.802 "nvme_iov_md": false 00:08:23.802 }, 00:08:23.802 "memory_domains": [ 00:08:23.802 { 00:08:23.802 "dma_device_id": "system", 00:08:23.802 "dma_device_type": 1 00:08:23.802 }, 00:08:23.802 { 00:08:23.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.802 "dma_device_type": 2 00:08:23.802 } 00:08:23.802 ], 00:08:23.802 "driver_specific": {} 00:08:23.802 } 00:08:23.802 ] 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.802 "name": "Existed_Raid", 00:08:23.802 "uuid": "ca9182de-5339-4ec3-b816-3737c0562772", 00:08:23.802 "strip_size_kb": 64, 00:08:23.802 "state": "configuring", 00:08:23.802 "raid_level": "raid0", 00:08:23.802 "superblock": true, 00:08:23.802 "num_base_bdevs": 3, 00:08:23.802 "num_base_bdevs_discovered": 2, 00:08:23.802 "num_base_bdevs_operational": 3, 00:08:23.802 "base_bdevs_list": [ 00:08:23.802 { 00:08:23.802 "name": "BaseBdev1", 00:08:23.802 "uuid": "8ccde90b-9631-4fb7-859a-c9cb7d6b69b0", 00:08:23.802 "is_configured": true, 00:08:23.802 "data_offset": 2048, 00:08:23.802 "data_size": 63488 00:08:23.802 }, 00:08:23.802 { 00:08:23.802 "name": "BaseBdev2", 00:08:23.802 "uuid": "45b6d9ba-fcd7-414c-8aa9-ce9a3b2c2067", 00:08:23.802 "is_configured": true, 00:08:23.802 "data_offset": 2048, 00:08:23.802 "data_size": 63488 00:08:23.802 }, 00:08:23.802 { 00:08:23.802 "name": "BaseBdev3", 00:08:23.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.802 "is_configured": false, 00:08:23.802 "data_offset": 0, 00:08:23.802 "data_size": 0 00:08:23.802 } 00:08:23.802 ] 00:08:23.802 }' 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.802 09:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.089 [2024-12-06 09:45:49.342871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:24.089 [2024-12-06 09:45:49.343238] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:24.089 BaseBdev3 00:08:24.089 [2024-12-06 09:45:49.343286] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:24.089 [2024-12-06 09:45:49.343556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:24.089 [2024-12-06 09:45:49.343717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:24.089 [2024-12-06 09:45:49.343729] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:24.089 [2024-12-06 09:45:49.343863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:24.089 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.349 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.349 [ 00:08:24.349 { 00:08:24.349 "name": "BaseBdev3", 00:08:24.349 "aliases": [ 00:08:24.349 "99998d16-ae18-4edd-bcc5-c89c3c2c18df" 00:08:24.349 ], 00:08:24.349 "product_name": "Malloc disk", 00:08:24.349 "block_size": 512, 00:08:24.349 "num_blocks": 65536, 00:08:24.349 "uuid": "99998d16-ae18-4edd-bcc5-c89c3c2c18df", 00:08:24.349 "assigned_rate_limits": { 00:08:24.349 "rw_ios_per_sec": 0, 00:08:24.349 "rw_mbytes_per_sec": 0, 00:08:24.349 "r_mbytes_per_sec": 0, 00:08:24.349 "w_mbytes_per_sec": 0 00:08:24.349 }, 00:08:24.349 "claimed": true, 00:08:24.349 "claim_type": "exclusive_write", 00:08:24.349 "zoned": false, 00:08:24.349 "supported_io_types": { 00:08:24.350 "read": true, 00:08:24.350 "write": true, 00:08:24.350 "unmap": true, 00:08:24.350 "flush": true, 00:08:24.350 "reset": true, 00:08:24.350 "nvme_admin": false, 00:08:24.350 "nvme_io": false, 00:08:24.350 "nvme_io_md": false, 00:08:24.350 "write_zeroes": true, 00:08:24.350 "zcopy": true, 00:08:24.350 "get_zone_info": false, 00:08:24.350 "zone_management": false, 00:08:24.350 "zone_append": false, 00:08:24.350 "compare": false, 00:08:24.350 "compare_and_write": false, 00:08:24.350 "abort": true, 00:08:24.350 "seek_hole": false, 00:08:24.350 "seek_data": false, 00:08:24.350 "copy": true, 00:08:24.350 "nvme_iov_md": false 00:08:24.350 }, 00:08:24.350 "memory_domains": [ 00:08:24.350 { 00:08:24.350 "dma_device_id": "system", 00:08:24.350 "dma_device_type": 1 00:08:24.350 }, 00:08:24.350 { 00:08:24.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.350 "dma_device_type": 2 00:08:24.350 } 00:08:24.350 ], 00:08:24.350 "driver_specific": 
{} 00:08:24.350 } 00:08:24.350 ] 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.350 "name": "Existed_Raid", 00:08:24.350 "uuid": "ca9182de-5339-4ec3-b816-3737c0562772", 00:08:24.350 "strip_size_kb": 64, 00:08:24.350 "state": "online", 00:08:24.350 "raid_level": "raid0", 00:08:24.350 "superblock": true, 00:08:24.350 "num_base_bdevs": 3, 00:08:24.350 "num_base_bdevs_discovered": 3, 00:08:24.350 "num_base_bdevs_operational": 3, 00:08:24.350 "base_bdevs_list": [ 00:08:24.350 { 00:08:24.350 "name": "BaseBdev1", 00:08:24.350 "uuid": "8ccde90b-9631-4fb7-859a-c9cb7d6b69b0", 00:08:24.350 "is_configured": true, 00:08:24.350 "data_offset": 2048, 00:08:24.350 "data_size": 63488 00:08:24.350 }, 00:08:24.350 { 00:08:24.350 "name": "BaseBdev2", 00:08:24.350 "uuid": "45b6d9ba-fcd7-414c-8aa9-ce9a3b2c2067", 00:08:24.350 "is_configured": true, 00:08:24.350 "data_offset": 2048, 00:08:24.350 "data_size": 63488 00:08:24.350 }, 00:08:24.350 { 00:08:24.350 "name": "BaseBdev3", 00:08:24.350 "uuid": "99998d16-ae18-4edd-bcc5-c89c3c2c18df", 00:08:24.350 "is_configured": true, 00:08:24.350 "data_offset": 2048, 00:08:24.350 "data_size": 63488 00:08:24.350 } 00:08:24.350 ] 00:08:24.350 }' 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.350 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.610 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:24.610 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:24.610 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:24.610 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:24.610 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.610 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.610 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:24.610 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.610 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.610 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.610 [2024-12-06 09:45:49.858373] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.870 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.870 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.870 "name": "Existed_Raid", 00:08:24.870 "aliases": [ 00:08:24.870 "ca9182de-5339-4ec3-b816-3737c0562772" 00:08:24.870 ], 00:08:24.870 "product_name": "Raid Volume", 00:08:24.870 "block_size": 512, 00:08:24.870 "num_blocks": 190464, 00:08:24.870 "uuid": "ca9182de-5339-4ec3-b816-3737c0562772", 00:08:24.870 "assigned_rate_limits": { 00:08:24.870 "rw_ios_per_sec": 0, 00:08:24.870 "rw_mbytes_per_sec": 0, 00:08:24.870 "r_mbytes_per_sec": 0, 00:08:24.870 "w_mbytes_per_sec": 0 00:08:24.870 }, 00:08:24.870 "claimed": false, 00:08:24.870 "zoned": false, 00:08:24.870 "supported_io_types": { 00:08:24.870 "read": true, 00:08:24.870 "write": true, 00:08:24.870 "unmap": true, 00:08:24.870 "flush": true, 00:08:24.870 "reset": true, 00:08:24.870 "nvme_admin": false, 00:08:24.870 "nvme_io": false, 00:08:24.870 "nvme_io_md": false, 00:08:24.870 
"write_zeroes": true, 00:08:24.870 "zcopy": false, 00:08:24.870 "get_zone_info": false, 00:08:24.870 "zone_management": false, 00:08:24.870 "zone_append": false, 00:08:24.870 "compare": false, 00:08:24.870 "compare_and_write": false, 00:08:24.870 "abort": false, 00:08:24.870 "seek_hole": false, 00:08:24.870 "seek_data": false, 00:08:24.870 "copy": false, 00:08:24.870 "nvme_iov_md": false 00:08:24.870 }, 00:08:24.870 "memory_domains": [ 00:08:24.870 { 00:08:24.870 "dma_device_id": "system", 00:08:24.870 "dma_device_type": 1 00:08:24.870 }, 00:08:24.870 { 00:08:24.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.870 "dma_device_type": 2 00:08:24.870 }, 00:08:24.870 { 00:08:24.870 "dma_device_id": "system", 00:08:24.870 "dma_device_type": 1 00:08:24.870 }, 00:08:24.870 { 00:08:24.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.870 "dma_device_type": 2 00:08:24.870 }, 00:08:24.870 { 00:08:24.870 "dma_device_id": "system", 00:08:24.870 "dma_device_type": 1 00:08:24.870 }, 00:08:24.870 { 00:08:24.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.870 "dma_device_type": 2 00:08:24.870 } 00:08:24.870 ], 00:08:24.870 "driver_specific": { 00:08:24.870 "raid": { 00:08:24.870 "uuid": "ca9182de-5339-4ec3-b816-3737c0562772", 00:08:24.870 "strip_size_kb": 64, 00:08:24.870 "state": "online", 00:08:24.870 "raid_level": "raid0", 00:08:24.870 "superblock": true, 00:08:24.870 "num_base_bdevs": 3, 00:08:24.870 "num_base_bdevs_discovered": 3, 00:08:24.870 "num_base_bdevs_operational": 3, 00:08:24.870 "base_bdevs_list": [ 00:08:24.870 { 00:08:24.870 "name": "BaseBdev1", 00:08:24.870 "uuid": "8ccde90b-9631-4fb7-859a-c9cb7d6b69b0", 00:08:24.870 "is_configured": true, 00:08:24.870 "data_offset": 2048, 00:08:24.870 "data_size": 63488 00:08:24.870 }, 00:08:24.870 { 00:08:24.870 "name": "BaseBdev2", 00:08:24.870 "uuid": "45b6d9ba-fcd7-414c-8aa9-ce9a3b2c2067", 00:08:24.870 "is_configured": true, 00:08:24.870 "data_offset": 2048, 00:08:24.870 "data_size": 63488 00:08:24.870 }, 
00:08:24.870 { 00:08:24.870 "name": "BaseBdev3", 00:08:24.870 "uuid": "99998d16-ae18-4edd-bcc5-c89c3c2c18df", 00:08:24.870 "is_configured": true, 00:08:24.870 "data_offset": 2048, 00:08:24.870 "data_size": 63488 00:08:24.870 } 00:08:24.870 ] 00:08:24.870 } 00:08:24.870 } 00:08:24.870 }' 00:08:24.870 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.870 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:24.870 BaseBdev2 00:08:24.870 BaseBdev3' 00:08:24.870 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.870 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.870 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.870 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:24.870 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.870 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.870 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.870 
09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.870 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.131 [2024-12-06 09:45:50.153567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:25.131 [2024-12-06 09:45:50.153636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:25.131 [2024-12-06 09:45:50.153709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.131 "name": "Existed_Raid", 00:08:25.131 "uuid": "ca9182de-5339-4ec3-b816-3737c0562772", 00:08:25.131 "strip_size_kb": 64, 00:08:25.131 "state": "offline", 00:08:25.131 "raid_level": "raid0", 00:08:25.131 "superblock": true, 00:08:25.131 "num_base_bdevs": 3, 00:08:25.131 "num_base_bdevs_discovered": 2, 00:08:25.131 "num_base_bdevs_operational": 2, 00:08:25.131 "base_bdevs_list": [ 00:08:25.131 { 00:08:25.131 "name": null, 00:08:25.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.131 "is_configured": false, 00:08:25.131 "data_offset": 0, 00:08:25.131 "data_size": 63488 00:08:25.131 }, 00:08:25.131 { 00:08:25.131 "name": "BaseBdev2", 00:08:25.131 "uuid": "45b6d9ba-fcd7-414c-8aa9-ce9a3b2c2067", 00:08:25.131 "is_configured": true, 00:08:25.131 "data_offset": 2048, 00:08:25.131 "data_size": 63488 00:08:25.131 }, 00:08:25.131 { 00:08:25.131 "name": "BaseBdev3", 00:08:25.131 "uuid": "99998d16-ae18-4edd-bcc5-c89c3c2c18df", 
00:08:25.131 "is_configured": true, 00:08:25.131 "data_offset": 2048, 00:08:25.131 "data_size": 63488 00:08:25.131 } 00:08:25.131 ] 00:08:25.131 }' 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.131 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.391 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:25.391 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.391 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.391 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:25.391 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.391 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.650 [2024-12-06 09:45:50.702162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.650 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.650 [2024-12-06 09:45:50.858549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:25.650 [2024-12-06 09:45:50.858647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:25.910 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.910 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:25.910 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.910 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:25.910 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:25.910 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.910 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 BaseBdev2 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:25.910 09:45:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 [ 00:08:25.910 { 00:08:25.910 "name": "BaseBdev2", 00:08:25.910 "aliases": [ 00:08:25.910 "c0665eb5-9427-4fb3-8a35-8f41390f333b" 00:08:25.910 ], 00:08:25.910 "product_name": "Malloc disk", 00:08:25.910 "block_size": 512, 00:08:25.910 "num_blocks": 65536, 00:08:25.910 "uuid": "c0665eb5-9427-4fb3-8a35-8f41390f333b", 00:08:25.910 "assigned_rate_limits": { 00:08:25.910 "rw_ios_per_sec": 0, 00:08:25.910 "rw_mbytes_per_sec": 0, 00:08:25.910 "r_mbytes_per_sec": 0, 00:08:25.910 "w_mbytes_per_sec": 0 00:08:25.910 }, 00:08:25.910 "claimed": false, 00:08:25.910 "zoned": false, 00:08:25.910 "supported_io_types": { 00:08:25.910 "read": true, 00:08:25.910 "write": true, 00:08:25.910 "unmap": true, 00:08:25.910 "flush": true, 00:08:25.910 "reset": true, 00:08:25.910 "nvme_admin": false, 00:08:25.910 "nvme_io": false, 00:08:25.910 "nvme_io_md": false, 00:08:25.910 "write_zeroes": true, 00:08:25.910 "zcopy": true, 00:08:25.910 "get_zone_info": false, 00:08:25.910 
"zone_management": false, 00:08:25.910 "zone_append": false, 00:08:25.910 "compare": false, 00:08:25.910 "compare_and_write": false, 00:08:25.910 "abort": true, 00:08:25.910 "seek_hole": false, 00:08:25.910 "seek_data": false, 00:08:25.910 "copy": true, 00:08:25.910 "nvme_iov_md": false 00:08:25.910 }, 00:08:25.910 "memory_domains": [ 00:08:25.910 { 00:08:25.910 "dma_device_id": "system", 00:08:25.910 "dma_device_type": 1 00:08:25.910 }, 00:08:25.910 { 00:08:25.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.910 "dma_device_type": 2 00:08:25.910 } 00:08:25.910 ], 00:08:25.910 "driver_specific": {} 00:08:25.910 } 00:08:25.910 ] 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:25.910 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.911 BaseBdev3 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.911 [ 00:08:25.911 { 00:08:25.911 "name": "BaseBdev3", 00:08:25.911 "aliases": [ 00:08:25.911 "a0d90279-ba79-4e09-a8c5-41311c240d6a" 00:08:25.911 ], 00:08:25.911 "product_name": "Malloc disk", 00:08:25.911 "block_size": 512, 00:08:25.911 "num_blocks": 65536, 00:08:25.911 "uuid": "a0d90279-ba79-4e09-a8c5-41311c240d6a", 00:08:25.911 "assigned_rate_limits": { 00:08:25.911 "rw_ios_per_sec": 0, 00:08:25.911 "rw_mbytes_per_sec": 0, 00:08:25.911 "r_mbytes_per_sec": 0, 00:08:25.911 "w_mbytes_per_sec": 0 00:08:25.911 }, 00:08:25.911 "claimed": false, 00:08:25.911 "zoned": false, 00:08:25.911 "supported_io_types": { 00:08:25.911 "read": true, 00:08:25.911 "write": true, 00:08:25.911 "unmap": true, 00:08:25.911 "flush": true, 00:08:25.911 "reset": true, 00:08:25.911 "nvme_admin": false, 00:08:25.911 "nvme_io": false, 00:08:25.911 "nvme_io_md": false, 00:08:25.911 "write_zeroes": true, 00:08:25.911 
"zcopy": true, 00:08:25.911 "get_zone_info": false, 00:08:25.911 "zone_management": false, 00:08:25.911 "zone_append": false, 00:08:25.911 "compare": false, 00:08:25.911 "compare_and_write": false, 00:08:25.911 "abort": true, 00:08:25.911 "seek_hole": false, 00:08:25.911 "seek_data": false, 00:08:25.911 "copy": true, 00:08:25.911 "nvme_iov_md": false 00:08:25.911 }, 00:08:25.911 "memory_domains": [ 00:08:25.911 { 00:08:25.911 "dma_device_id": "system", 00:08:25.911 "dma_device_type": 1 00:08:25.911 }, 00:08:25.911 { 00:08:25.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.911 "dma_device_type": 2 00:08:25.911 } 00:08:25.911 ], 00:08:25.911 "driver_specific": {} 00:08:25.911 } 00:08:25.911 ] 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.911 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.911 [2024-12-06 09:45:51.177155] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.911 [2024-12-06 09:45:51.177239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.911 [2024-12-06 09:45:51.177282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.911 [2024-12-06 09:45:51.179064] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.171 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.171 09:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.171 "name": "Existed_Raid", 00:08:26.171 "uuid": "b49c6040-adb2-4c7e-a981-9eea27857593", 00:08:26.171 "strip_size_kb": 64, 00:08:26.171 "state": "configuring", 00:08:26.171 "raid_level": "raid0", 00:08:26.171 "superblock": true, 00:08:26.171 "num_base_bdevs": 3, 00:08:26.171 "num_base_bdevs_discovered": 2, 00:08:26.171 "num_base_bdevs_operational": 3, 00:08:26.171 "base_bdevs_list": [ 00:08:26.171 { 00:08:26.171 "name": "BaseBdev1", 00:08:26.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.171 "is_configured": false, 00:08:26.171 "data_offset": 0, 00:08:26.171 "data_size": 0 00:08:26.171 }, 00:08:26.171 { 00:08:26.171 "name": "BaseBdev2", 00:08:26.171 "uuid": "c0665eb5-9427-4fb3-8a35-8f41390f333b", 00:08:26.171 "is_configured": true, 00:08:26.171 "data_offset": 2048, 00:08:26.171 "data_size": 63488 00:08:26.171 }, 00:08:26.171 { 00:08:26.171 "name": "BaseBdev3", 00:08:26.171 "uuid": "a0d90279-ba79-4e09-a8c5-41311c240d6a", 00:08:26.171 "is_configured": true, 00:08:26.171 "data_offset": 2048, 00:08:26.171 "data_size": 63488 00:08:26.172 } 00:08:26.172 ] 00:08:26.172 }' 00:08:26.172 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.172 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.432 [2024-12-06 09:45:51.640370] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.432 09:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.432 "name": "Existed_Raid", 00:08:26.432 "uuid": "b49c6040-adb2-4c7e-a981-9eea27857593", 00:08:26.432 "strip_size_kb": 64, 
00:08:26.432 "state": "configuring", 00:08:26.432 "raid_level": "raid0", 00:08:26.432 "superblock": true, 00:08:26.432 "num_base_bdevs": 3, 00:08:26.432 "num_base_bdevs_discovered": 1, 00:08:26.432 "num_base_bdevs_operational": 3, 00:08:26.432 "base_bdevs_list": [ 00:08:26.432 { 00:08:26.432 "name": "BaseBdev1", 00:08:26.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.432 "is_configured": false, 00:08:26.432 "data_offset": 0, 00:08:26.432 "data_size": 0 00:08:26.432 }, 00:08:26.432 { 00:08:26.432 "name": null, 00:08:26.432 "uuid": "c0665eb5-9427-4fb3-8a35-8f41390f333b", 00:08:26.432 "is_configured": false, 00:08:26.432 "data_offset": 0, 00:08:26.432 "data_size": 63488 00:08:26.432 }, 00:08:26.432 { 00:08:26.432 "name": "BaseBdev3", 00:08:26.432 "uuid": "a0d90279-ba79-4e09-a8c5-41311c240d6a", 00:08:26.432 "is_configured": true, 00:08:26.432 "data_offset": 2048, 00:08:26.432 "data_size": 63488 00:08:26.432 } 00:08:26.432 ] 00:08:26.432 }' 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.432 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.001 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.001 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:27.001 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.001 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.001 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.001 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:27.001 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.002 [2024-12-06 09:45:52.181581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.002 BaseBdev1 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.002 
[ 00:08:27.002 { 00:08:27.002 "name": "BaseBdev1", 00:08:27.002 "aliases": [ 00:08:27.002 "3ad06468-8ec8-4a36-99e7-797fc56c6f25" 00:08:27.002 ], 00:08:27.002 "product_name": "Malloc disk", 00:08:27.002 "block_size": 512, 00:08:27.002 "num_blocks": 65536, 00:08:27.002 "uuid": "3ad06468-8ec8-4a36-99e7-797fc56c6f25", 00:08:27.002 "assigned_rate_limits": { 00:08:27.002 "rw_ios_per_sec": 0, 00:08:27.002 "rw_mbytes_per_sec": 0, 00:08:27.002 "r_mbytes_per_sec": 0, 00:08:27.002 "w_mbytes_per_sec": 0 00:08:27.002 }, 00:08:27.002 "claimed": true, 00:08:27.002 "claim_type": "exclusive_write", 00:08:27.002 "zoned": false, 00:08:27.002 "supported_io_types": { 00:08:27.002 "read": true, 00:08:27.002 "write": true, 00:08:27.002 "unmap": true, 00:08:27.002 "flush": true, 00:08:27.002 "reset": true, 00:08:27.002 "nvme_admin": false, 00:08:27.002 "nvme_io": false, 00:08:27.002 "nvme_io_md": false, 00:08:27.002 "write_zeroes": true, 00:08:27.002 "zcopy": true, 00:08:27.002 "get_zone_info": false, 00:08:27.002 "zone_management": false, 00:08:27.002 "zone_append": false, 00:08:27.002 "compare": false, 00:08:27.002 "compare_and_write": false, 00:08:27.002 "abort": true, 00:08:27.002 "seek_hole": false, 00:08:27.002 "seek_data": false, 00:08:27.002 "copy": true, 00:08:27.002 "nvme_iov_md": false 00:08:27.002 }, 00:08:27.002 "memory_domains": [ 00:08:27.002 { 00:08:27.002 "dma_device_id": "system", 00:08:27.002 "dma_device_type": 1 00:08:27.002 }, 00:08:27.002 { 00:08:27.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.002 "dma_device_type": 2 00:08:27.002 } 00:08:27.002 ], 00:08:27.002 "driver_specific": {} 00:08:27.002 } 00:08:27.002 ] 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.002 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.262 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.262 "name": "Existed_Raid", 00:08:27.262 "uuid": "b49c6040-adb2-4c7e-a981-9eea27857593", 00:08:27.262 "strip_size_kb": 64, 00:08:27.262 "state": "configuring", 00:08:27.262 "raid_level": "raid0", 00:08:27.262 "superblock": true, 
00:08:27.262 "num_base_bdevs": 3, 00:08:27.262 "num_base_bdevs_discovered": 2, 00:08:27.262 "num_base_bdevs_operational": 3, 00:08:27.262 "base_bdevs_list": [ 00:08:27.262 { 00:08:27.262 "name": "BaseBdev1", 00:08:27.262 "uuid": "3ad06468-8ec8-4a36-99e7-797fc56c6f25", 00:08:27.262 "is_configured": true, 00:08:27.262 "data_offset": 2048, 00:08:27.262 "data_size": 63488 00:08:27.262 }, 00:08:27.262 { 00:08:27.262 "name": null, 00:08:27.262 "uuid": "c0665eb5-9427-4fb3-8a35-8f41390f333b", 00:08:27.262 "is_configured": false, 00:08:27.262 "data_offset": 0, 00:08:27.262 "data_size": 63488 00:08:27.262 }, 00:08:27.262 { 00:08:27.262 "name": "BaseBdev3", 00:08:27.262 "uuid": "a0d90279-ba79-4e09-a8c5-41311c240d6a", 00:08:27.262 "is_configured": true, 00:08:27.262 "data_offset": 2048, 00:08:27.262 "data_size": 63488 00:08:27.262 } 00:08:27.262 ] 00:08:27.262 }' 00:08:27.262 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.262 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.521 [2024-12-06 09:45:52.752678] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:27.521 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.780 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.780 "name": "Existed_Raid", 00:08:27.780 "uuid": "b49c6040-adb2-4c7e-a981-9eea27857593", 00:08:27.780 "strip_size_kb": 64, 00:08:27.780 "state": "configuring", 00:08:27.780 "raid_level": "raid0", 00:08:27.780 "superblock": true, 00:08:27.780 "num_base_bdevs": 3, 00:08:27.780 "num_base_bdevs_discovered": 1, 00:08:27.780 "num_base_bdevs_operational": 3, 00:08:27.780 "base_bdevs_list": [ 00:08:27.780 { 00:08:27.780 "name": "BaseBdev1", 00:08:27.780 "uuid": "3ad06468-8ec8-4a36-99e7-797fc56c6f25", 00:08:27.780 "is_configured": true, 00:08:27.780 "data_offset": 2048, 00:08:27.780 "data_size": 63488 00:08:27.780 }, 00:08:27.780 { 00:08:27.780 "name": null, 00:08:27.780 "uuid": "c0665eb5-9427-4fb3-8a35-8f41390f333b", 00:08:27.780 "is_configured": false, 00:08:27.780 "data_offset": 0, 00:08:27.780 "data_size": 63488 00:08:27.780 }, 00:08:27.780 { 00:08:27.780 "name": null, 00:08:27.780 "uuid": "a0d90279-ba79-4e09-a8c5-41311c240d6a", 00:08:27.780 "is_configured": false, 00:08:27.780 "data_offset": 0, 00:08:27.780 "data_size": 63488 00:08:27.780 } 00:08:27.780 ] 00:08:27.780 }' 00:08:27.780 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.780 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.040 [2024-12-06 09:45:53.283823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.040 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.299 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.299 "name": "Existed_Raid", 00:08:28.299 "uuid": "b49c6040-adb2-4c7e-a981-9eea27857593", 00:08:28.299 "strip_size_kb": 64, 00:08:28.299 "state": "configuring", 00:08:28.299 "raid_level": "raid0", 00:08:28.299 "superblock": true, 00:08:28.299 "num_base_bdevs": 3, 00:08:28.299 "num_base_bdevs_discovered": 2, 00:08:28.299 "num_base_bdevs_operational": 3, 00:08:28.299 "base_bdevs_list": [ 00:08:28.299 { 00:08:28.299 "name": "BaseBdev1", 00:08:28.299 "uuid": "3ad06468-8ec8-4a36-99e7-797fc56c6f25", 00:08:28.299 "is_configured": true, 00:08:28.299 "data_offset": 2048, 00:08:28.299 "data_size": 63488 00:08:28.299 }, 00:08:28.299 { 00:08:28.299 "name": null, 00:08:28.299 "uuid": "c0665eb5-9427-4fb3-8a35-8f41390f333b", 00:08:28.299 "is_configured": false, 00:08:28.299 "data_offset": 0, 00:08:28.299 "data_size": 63488 00:08:28.299 }, 00:08:28.299 { 00:08:28.299 "name": "BaseBdev3", 00:08:28.299 "uuid": "a0d90279-ba79-4e09-a8c5-41311c240d6a", 00:08:28.299 "is_configured": true, 00:08:28.299 "data_offset": 2048, 00:08:28.299 "data_size": 63488 00:08:28.299 } 00:08:28.299 ] 00:08:28.299 }' 00:08:28.299 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.299 09:45:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.558 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:28.558 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.558 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.558 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.558 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.558 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:28.558 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:28.558 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.558 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.558 [2024-12-06 09:45:53.763032] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.818 "name": "Existed_Raid", 00:08:28.818 "uuid": "b49c6040-adb2-4c7e-a981-9eea27857593", 00:08:28.818 "strip_size_kb": 64, 00:08:28.818 "state": "configuring", 00:08:28.818 "raid_level": "raid0", 00:08:28.818 "superblock": true, 00:08:28.818 "num_base_bdevs": 3, 00:08:28.818 "num_base_bdevs_discovered": 1, 00:08:28.818 "num_base_bdevs_operational": 3, 00:08:28.818 "base_bdevs_list": [ 00:08:28.818 { 00:08:28.818 "name": null, 00:08:28.818 "uuid": "3ad06468-8ec8-4a36-99e7-797fc56c6f25", 00:08:28.818 "is_configured": false, 00:08:28.818 "data_offset": 0, 00:08:28.818 "data_size": 63488 00:08:28.818 }, 00:08:28.818 { 00:08:28.818 "name": null, 00:08:28.818 "uuid": "c0665eb5-9427-4fb3-8a35-8f41390f333b", 00:08:28.818 "is_configured": false, 00:08:28.818 "data_offset": 0, 00:08:28.818 
"data_size": 63488 00:08:28.818 }, 00:08:28.818 { 00:08:28.818 "name": "BaseBdev3", 00:08:28.818 "uuid": "a0d90279-ba79-4e09-a8c5-41311c240d6a", 00:08:28.818 "is_configured": true, 00:08:28.818 "data_offset": 2048, 00:08:28.818 "data_size": 63488 00:08:28.818 } 00:08:28.818 ] 00:08:28.818 }' 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.818 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.078 [2024-12-06 09:45:54.295494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:29.078 09:45:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.078 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.337 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.337 "name": "Existed_Raid", 00:08:29.337 "uuid": "b49c6040-adb2-4c7e-a981-9eea27857593", 00:08:29.337 "strip_size_kb": 64, 00:08:29.337 "state": "configuring", 00:08:29.337 "raid_level": "raid0", 00:08:29.337 "superblock": true, 00:08:29.337 "num_base_bdevs": 3, 00:08:29.337 
"num_base_bdevs_discovered": 2, 00:08:29.337 "num_base_bdevs_operational": 3, 00:08:29.337 "base_bdevs_list": [ 00:08:29.337 { 00:08:29.337 "name": null, 00:08:29.337 "uuid": "3ad06468-8ec8-4a36-99e7-797fc56c6f25", 00:08:29.337 "is_configured": false, 00:08:29.337 "data_offset": 0, 00:08:29.337 "data_size": 63488 00:08:29.337 }, 00:08:29.337 { 00:08:29.337 "name": "BaseBdev2", 00:08:29.337 "uuid": "c0665eb5-9427-4fb3-8a35-8f41390f333b", 00:08:29.337 "is_configured": true, 00:08:29.337 "data_offset": 2048, 00:08:29.337 "data_size": 63488 00:08:29.337 }, 00:08:29.337 { 00:08:29.337 "name": "BaseBdev3", 00:08:29.337 "uuid": "a0d90279-ba79-4e09-a8c5-41311c240d6a", 00:08:29.337 "is_configured": true, 00:08:29.337 "data_offset": 2048, 00:08:29.337 "data_size": 63488 00:08:29.337 } 00:08:29.337 ] 00:08:29.337 }' 00:08:29.337 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.337 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.596 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:29.596 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.596 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.596 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.596 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.596 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:29.596 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.596 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.596 09:45:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.596 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:29.596 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.596 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3ad06468-8ec8-4a36-99e7-797fc56c6f25 00:08:29.596 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.596 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.857 [2024-12-06 09:45:54.877074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:29.857 [2024-12-06 09:45:54.877431] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:29.857 [2024-12-06 09:45:54.877488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:29.857 [2024-12-06 09:45:54.877760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:29.857 [2024-12-06 09:45:54.877948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:29.857 [2024-12-06 09:45:54.877991] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:29.857 NewBaseBdev 00:08:29.857 [2024-12-06 09:45:54.878187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.857 [ 00:08:29.857 { 00:08:29.857 "name": "NewBaseBdev", 00:08:29.857 "aliases": [ 00:08:29.857 "3ad06468-8ec8-4a36-99e7-797fc56c6f25" 00:08:29.857 ], 00:08:29.857 "product_name": "Malloc disk", 00:08:29.857 "block_size": 512, 00:08:29.857 "num_blocks": 65536, 00:08:29.857 "uuid": "3ad06468-8ec8-4a36-99e7-797fc56c6f25", 00:08:29.857 "assigned_rate_limits": { 00:08:29.857 "rw_ios_per_sec": 0, 00:08:29.857 "rw_mbytes_per_sec": 0, 00:08:29.857 "r_mbytes_per_sec": 0, 00:08:29.857 "w_mbytes_per_sec": 0 00:08:29.857 }, 00:08:29.857 "claimed": true, 00:08:29.857 "claim_type": "exclusive_write", 00:08:29.857 "zoned": false, 00:08:29.857 "supported_io_types": { 00:08:29.857 "read": true, 00:08:29.857 "write": true, 
00:08:29.857 "unmap": true, 00:08:29.857 "flush": true, 00:08:29.857 "reset": true, 00:08:29.857 "nvme_admin": false, 00:08:29.857 "nvme_io": false, 00:08:29.857 "nvme_io_md": false, 00:08:29.857 "write_zeroes": true, 00:08:29.857 "zcopy": true, 00:08:29.857 "get_zone_info": false, 00:08:29.857 "zone_management": false, 00:08:29.857 "zone_append": false, 00:08:29.857 "compare": false, 00:08:29.857 "compare_and_write": false, 00:08:29.857 "abort": true, 00:08:29.857 "seek_hole": false, 00:08:29.857 "seek_data": false, 00:08:29.857 "copy": true, 00:08:29.857 "nvme_iov_md": false 00:08:29.857 }, 00:08:29.857 "memory_domains": [ 00:08:29.857 { 00:08:29.857 "dma_device_id": "system", 00:08:29.857 "dma_device_type": 1 00:08:29.857 }, 00:08:29.857 { 00:08:29.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.857 "dma_device_type": 2 00:08:29.857 } 00:08:29.857 ], 00:08:29.857 "driver_specific": {} 00:08:29.857 } 00:08:29.857 ] 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.857 "name": "Existed_Raid", 00:08:29.857 "uuid": "b49c6040-adb2-4c7e-a981-9eea27857593", 00:08:29.857 "strip_size_kb": 64, 00:08:29.857 "state": "online", 00:08:29.857 "raid_level": "raid0", 00:08:29.857 "superblock": true, 00:08:29.857 "num_base_bdevs": 3, 00:08:29.857 "num_base_bdevs_discovered": 3, 00:08:29.857 "num_base_bdevs_operational": 3, 00:08:29.857 "base_bdevs_list": [ 00:08:29.857 { 00:08:29.857 "name": "NewBaseBdev", 00:08:29.857 "uuid": "3ad06468-8ec8-4a36-99e7-797fc56c6f25", 00:08:29.857 "is_configured": true, 00:08:29.857 "data_offset": 2048, 00:08:29.857 "data_size": 63488 00:08:29.857 }, 00:08:29.857 { 00:08:29.857 "name": "BaseBdev2", 00:08:29.857 "uuid": "c0665eb5-9427-4fb3-8a35-8f41390f333b", 00:08:29.857 "is_configured": true, 00:08:29.857 "data_offset": 2048, 00:08:29.857 "data_size": 63488 00:08:29.857 }, 00:08:29.857 { 00:08:29.857 "name": "BaseBdev3", 00:08:29.857 "uuid": 
"a0d90279-ba79-4e09-a8c5-41311c240d6a", 00:08:29.857 "is_configured": true, 00:08:29.857 "data_offset": 2048, 00:08:29.857 "data_size": 63488 00:08:29.857 } 00:08:29.857 ] 00:08:29.857 }' 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.857 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.139 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:30.139 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:30.139 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:30.139 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:30.139 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.139 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.139 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:30.139 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.140 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.140 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.140 [2024-12-06 09:45:55.364601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.140 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.140 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:30.140 "name": "Existed_Raid", 00:08:30.140 "aliases": [ 00:08:30.140 "b49c6040-adb2-4c7e-a981-9eea27857593" 
00:08:30.140 ], 00:08:30.140 "product_name": "Raid Volume", 00:08:30.140 "block_size": 512, 00:08:30.140 "num_blocks": 190464, 00:08:30.140 "uuid": "b49c6040-adb2-4c7e-a981-9eea27857593", 00:08:30.140 "assigned_rate_limits": { 00:08:30.140 "rw_ios_per_sec": 0, 00:08:30.140 "rw_mbytes_per_sec": 0, 00:08:30.140 "r_mbytes_per_sec": 0, 00:08:30.140 "w_mbytes_per_sec": 0 00:08:30.140 }, 00:08:30.140 "claimed": false, 00:08:30.140 "zoned": false, 00:08:30.140 "supported_io_types": { 00:08:30.140 "read": true, 00:08:30.140 "write": true, 00:08:30.140 "unmap": true, 00:08:30.140 "flush": true, 00:08:30.140 "reset": true, 00:08:30.140 "nvme_admin": false, 00:08:30.140 "nvme_io": false, 00:08:30.140 "nvme_io_md": false, 00:08:30.140 "write_zeroes": true, 00:08:30.140 "zcopy": false, 00:08:30.140 "get_zone_info": false, 00:08:30.140 "zone_management": false, 00:08:30.140 "zone_append": false, 00:08:30.140 "compare": false, 00:08:30.140 "compare_and_write": false, 00:08:30.140 "abort": false, 00:08:30.140 "seek_hole": false, 00:08:30.140 "seek_data": false, 00:08:30.140 "copy": false, 00:08:30.140 "nvme_iov_md": false 00:08:30.140 }, 00:08:30.140 "memory_domains": [ 00:08:30.140 { 00:08:30.140 "dma_device_id": "system", 00:08:30.140 "dma_device_type": 1 00:08:30.140 }, 00:08:30.140 { 00:08:30.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.140 "dma_device_type": 2 00:08:30.140 }, 00:08:30.140 { 00:08:30.140 "dma_device_id": "system", 00:08:30.140 "dma_device_type": 1 00:08:30.140 }, 00:08:30.140 { 00:08:30.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.140 "dma_device_type": 2 00:08:30.140 }, 00:08:30.140 { 00:08:30.140 "dma_device_id": "system", 00:08:30.140 "dma_device_type": 1 00:08:30.140 }, 00:08:30.140 { 00:08:30.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.140 "dma_device_type": 2 00:08:30.140 } 00:08:30.140 ], 00:08:30.140 "driver_specific": { 00:08:30.140 "raid": { 00:08:30.140 "uuid": "b49c6040-adb2-4c7e-a981-9eea27857593", 00:08:30.140 
"strip_size_kb": 64, 00:08:30.140 "state": "online", 00:08:30.140 "raid_level": "raid0", 00:08:30.140 "superblock": true, 00:08:30.140 "num_base_bdevs": 3, 00:08:30.140 "num_base_bdevs_discovered": 3, 00:08:30.140 "num_base_bdevs_operational": 3, 00:08:30.140 "base_bdevs_list": [ 00:08:30.140 { 00:08:30.140 "name": "NewBaseBdev", 00:08:30.140 "uuid": "3ad06468-8ec8-4a36-99e7-797fc56c6f25", 00:08:30.140 "is_configured": true, 00:08:30.140 "data_offset": 2048, 00:08:30.140 "data_size": 63488 00:08:30.140 }, 00:08:30.140 { 00:08:30.140 "name": "BaseBdev2", 00:08:30.140 "uuid": "c0665eb5-9427-4fb3-8a35-8f41390f333b", 00:08:30.140 "is_configured": true, 00:08:30.140 "data_offset": 2048, 00:08:30.140 "data_size": 63488 00:08:30.140 }, 00:08:30.140 { 00:08:30.140 "name": "BaseBdev3", 00:08:30.140 "uuid": "a0d90279-ba79-4e09-a8c5-41311c240d6a", 00:08:30.140 "is_configured": true, 00:08:30.140 "data_offset": 2048, 00:08:30.140 "data_size": 63488 00:08:30.140 } 00:08:30.140 ] 00:08:30.140 } 00:08:30.140 } 00:08:30.140 }' 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:30.414 BaseBdev2 00:08:30.414 BaseBdev3' 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.414 09:45:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.414 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.414 [2024-12-06 09:45:55.651802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.414 [2024-12-06 09:45:55.651877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.414 [2024-12-06 09:45:55.651982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.414 [2024-12-06 09:45:55.652061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.415 [2024-12-06 09:45:55.652112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:30.415 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.415 09:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64400 00:08:30.415 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64400 ']' 00:08:30.415 09:45:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64400 00:08:30.415 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:30.415 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.415 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64400 00:08:30.673 killing process with pid 64400 00:08:30.673 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.673 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.673 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64400' 00:08:30.673 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64400 00:08:30.673 [2024-12-06 09:45:55.702035] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.673 09:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64400 00:08:30.933 [2024-12-06 09:45:56.008650] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.315 09:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:32.315 00:08:32.315 real 0m10.744s 00:08:32.315 user 0m17.157s 00:08:32.315 sys 0m1.842s 00:08:32.315 09:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.315 09:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.315 ************************************ 00:08:32.315 END TEST raid_state_function_test_sb 00:08:32.315 ************************************ 00:08:32.315 09:45:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:32.315 09:45:57 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:32.315 09:45:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.315 09:45:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.315 ************************************ 00:08:32.315 START TEST raid_superblock_test 00:08:32.315 ************************************ 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:32.315 09:45:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65026 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65026 00:08:32.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65026 ']' 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.315 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.315 [2024-12-06 09:45:57.322015] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:08:32.315 [2024-12-06 09:45:57.322239] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65026 ] 00:08:32.315 [2024-12-06 09:45:57.494411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.575 [2024-12-06 09:45:57.607705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.575 [2024-12-06 09:45:57.801927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.575 [2024-12-06 09:45:57.802045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:33.144 
09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.144 malloc1 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:33.144 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.145 [2024-12-06 09:45:58.209030] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:33.145 [2024-12-06 09:45:58.209136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.145 [2024-12-06 09:45:58.209184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:33.145 [2024-12-06 09:45:58.209234] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.145 [2024-12-06 09:45:58.211338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.145 [2024-12-06 09:45:58.211405] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:33.145 pt1 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.145 malloc2 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.145 [2024-12-06 09:45:58.265992] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:33.145 [2024-12-06 09:45:58.266107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.145 [2024-12-06 09:45:58.266160] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:33.145 [2024-12-06 09:45:58.266200] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.145 [2024-12-06 09:45:58.268362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.145 [2024-12-06 09:45:58.268438] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:33.145 
pt2 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.145 malloc3 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.145 [2024-12-06 09:45:58.336280] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:33.145 [2024-12-06 09:45:58.336378] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.145 [2024-12-06 09:45:58.336417] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:33.145 [2024-12-06 09:45:58.336445] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.145 [2024-12-06 09:45:58.338474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.145 [2024-12-06 09:45:58.338543] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:33.145 pt3 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.145 [2024-12-06 09:45:58.348291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:33.145 [2024-12-06 09:45:58.349960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:33.145 [2024-12-06 09:45:58.350017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:33.145 [2024-12-06 09:45:58.350195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:33.145 [2024-12-06 09:45:58.350210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:33.145 [2024-12-06 09:45:58.350433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:33.145 [2024-12-06 09:45:58.350585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:33.145 [2024-12-06 09:45:58.350599] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:33.145 [2024-12-06 09:45:58.350767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.145 09:45:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.145 "name": "raid_bdev1", 00:08:33.145 "uuid": "cfa7439e-d53d-4c1c-9745-f4bba097088e", 00:08:33.145 "strip_size_kb": 64, 00:08:33.145 "state": "online", 00:08:33.145 "raid_level": "raid0", 00:08:33.145 "superblock": true, 00:08:33.145 "num_base_bdevs": 3, 00:08:33.145 "num_base_bdevs_discovered": 3, 00:08:33.145 "num_base_bdevs_operational": 3, 00:08:33.145 "base_bdevs_list": [ 00:08:33.145 { 00:08:33.145 "name": "pt1", 00:08:33.145 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.145 "is_configured": true, 00:08:33.145 "data_offset": 2048, 00:08:33.145 "data_size": 63488 00:08:33.145 }, 00:08:33.145 { 00:08:33.145 "name": "pt2", 00:08:33.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.145 "is_configured": true, 00:08:33.145 "data_offset": 2048, 00:08:33.145 "data_size": 63488 00:08:33.145 }, 00:08:33.145 { 00:08:33.145 "name": "pt3", 00:08:33.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.145 "is_configured": true, 00:08:33.145 "data_offset": 2048, 00:08:33.145 "data_size": 63488 00:08:33.145 } 00:08:33.145 ] 00:08:33.145 }' 00:08:33.145 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.146 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.713 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:33.713 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:33.713 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:33.713 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:33.713 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.713 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.713 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.713 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.713 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.713 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.713 [2024-12-06 09:45:58.767932] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.713 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.713 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.713 "name": "raid_bdev1", 00:08:33.713 "aliases": [ 00:08:33.713 "cfa7439e-d53d-4c1c-9745-f4bba097088e" 00:08:33.713 ], 00:08:33.713 "product_name": "Raid Volume", 00:08:33.713 "block_size": 512, 00:08:33.713 "num_blocks": 190464, 00:08:33.713 "uuid": "cfa7439e-d53d-4c1c-9745-f4bba097088e", 00:08:33.713 "assigned_rate_limits": { 00:08:33.713 "rw_ios_per_sec": 0, 00:08:33.714 "rw_mbytes_per_sec": 0, 00:08:33.714 "r_mbytes_per_sec": 0, 00:08:33.714 "w_mbytes_per_sec": 0 00:08:33.714 }, 00:08:33.714 "claimed": false, 00:08:33.714 "zoned": false, 00:08:33.714 "supported_io_types": { 00:08:33.714 "read": true, 00:08:33.714 "write": true, 00:08:33.714 "unmap": true, 00:08:33.714 "flush": true, 00:08:33.714 "reset": true, 00:08:33.714 "nvme_admin": false, 00:08:33.714 "nvme_io": false, 00:08:33.714 "nvme_io_md": false, 00:08:33.714 "write_zeroes": true, 00:08:33.714 "zcopy": false, 00:08:33.714 "get_zone_info": false, 00:08:33.714 "zone_management": false, 00:08:33.714 "zone_append": false, 00:08:33.714 "compare": 
false, 00:08:33.714 "compare_and_write": false, 00:08:33.714 "abort": false, 00:08:33.714 "seek_hole": false, 00:08:33.714 "seek_data": false, 00:08:33.714 "copy": false, 00:08:33.714 "nvme_iov_md": false 00:08:33.714 }, 00:08:33.714 "memory_domains": [ 00:08:33.714 { 00:08:33.714 "dma_device_id": "system", 00:08:33.714 "dma_device_type": 1 00:08:33.714 }, 00:08:33.714 { 00:08:33.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.714 "dma_device_type": 2 00:08:33.714 }, 00:08:33.714 { 00:08:33.714 "dma_device_id": "system", 00:08:33.714 "dma_device_type": 1 00:08:33.714 }, 00:08:33.714 { 00:08:33.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.714 "dma_device_type": 2 00:08:33.714 }, 00:08:33.714 { 00:08:33.714 "dma_device_id": "system", 00:08:33.714 "dma_device_type": 1 00:08:33.714 }, 00:08:33.714 { 00:08:33.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.714 "dma_device_type": 2 00:08:33.714 } 00:08:33.714 ], 00:08:33.714 "driver_specific": { 00:08:33.714 "raid": { 00:08:33.714 "uuid": "cfa7439e-d53d-4c1c-9745-f4bba097088e", 00:08:33.714 "strip_size_kb": 64, 00:08:33.714 "state": "online", 00:08:33.714 "raid_level": "raid0", 00:08:33.714 "superblock": true, 00:08:33.714 "num_base_bdevs": 3, 00:08:33.714 "num_base_bdevs_discovered": 3, 00:08:33.714 "num_base_bdevs_operational": 3, 00:08:33.714 "base_bdevs_list": [ 00:08:33.714 { 00:08:33.714 "name": "pt1", 00:08:33.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.714 "is_configured": true, 00:08:33.714 "data_offset": 2048, 00:08:33.714 "data_size": 63488 00:08:33.714 }, 00:08:33.714 { 00:08:33.714 "name": "pt2", 00:08:33.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.714 "is_configured": true, 00:08:33.714 "data_offset": 2048, 00:08:33.714 "data_size": 63488 00:08:33.714 }, 00:08:33.714 { 00:08:33.714 "name": "pt3", 00:08:33.714 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:33.714 "is_configured": true, 00:08:33.714 "data_offset": 2048, 00:08:33.714 "data_size": 
63488 00:08:33.714 } 00:08:33.714 ] 00:08:33.714 } 00:08:33.714 } 00:08:33.714 }' 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:33.714 pt2 00:08:33.714 pt3' 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.714 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.975 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:33.975 [2024-12-06 09:45:59.023568] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cfa7439e-d53d-4c1c-9745-f4bba097088e 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cfa7439e-d53d-4c1c-9745-f4bba097088e ']' 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.975 [2024-12-06 09:45:59.075121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.975 [2024-12-06 09:45:59.075206] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.975 [2024-12-06 09:45:59.075318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.975 [2024-12-06 09:45:59.075396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.975 [2024-12-06 09:45:59.075429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:33.975 09:45:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.975 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.975 [2024-12-06 09:45:59.218935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:33.975 [2024-12-06 09:45:59.221046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:33.975 [2024-12-06 09:45:59.221179] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:33.976 [2024-12-06 09:45:59.221258] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:33.976 [2024-12-06 09:45:59.221351] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:33.976 [2024-12-06 09:45:59.221434] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:33.976 [2024-12-06 09:45:59.221496] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.976 [2024-12-06 09:45:59.221525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:33.976 request: 00:08:33.976 { 00:08:33.976 "name": "raid_bdev1", 00:08:33.976 "raid_level": "raid0", 00:08:33.976 "base_bdevs": [ 00:08:33.976 "malloc1", 00:08:33.976 "malloc2", 00:08:33.976 "malloc3" 00:08:33.976 ], 00:08:33.976 "strip_size_kb": 64, 00:08:33.976 "superblock": false, 00:08:33.976 "method": "bdev_raid_create", 00:08:33.976 "req_id": 1 00:08:33.976 } 00:08:33.976 Got JSON-RPC error response 00:08:33.976 response: 00:08:33.976 { 00:08:33.976 "code": -17, 00:08:33.976 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:33.976 } 00:08:33.976 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:33.976 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:33.976 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.976 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.976 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.976 09:45:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.976 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.976 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.976 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:33.976 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.236 [2024-12-06 09:45:59.286736] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:34.236 [2024-12-06 09:45:59.286835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.236 [2024-12-06 09:45:59.286874] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:34.236 [2024-12-06 09:45:59.286902] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.236 [2024-12-06 09:45:59.289169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.236 [2024-12-06 09:45:59.289239] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:34.236 [2024-12-06 09:45:59.289360] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:34.236 [2024-12-06 09:45:59.289443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:34.236 pt1 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.236 "name": "raid_bdev1", 00:08:34.236 "uuid": "cfa7439e-d53d-4c1c-9745-f4bba097088e", 00:08:34.236 
"strip_size_kb": 64, 00:08:34.236 "state": "configuring", 00:08:34.236 "raid_level": "raid0", 00:08:34.236 "superblock": true, 00:08:34.236 "num_base_bdevs": 3, 00:08:34.236 "num_base_bdevs_discovered": 1, 00:08:34.236 "num_base_bdevs_operational": 3, 00:08:34.236 "base_bdevs_list": [ 00:08:34.236 { 00:08:34.236 "name": "pt1", 00:08:34.236 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.236 "is_configured": true, 00:08:34.236 "data_offset": 2048, 00:08:34.236 "data_size": 63488 00:08:34.236 }, 00:08:34.236 { 00:08:34.236 "name": null, 00:08:34.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.236 "is_configured": false, 00:08:34.236 "data_offset": 2048, 00:08:34.236 "data_size": 63488 00:08:34.236 }, 00:08:34.236 { 00:08:34.236 "name": null, 00:08:34.236 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:34.236 "is_configured": false, 00:08:34.236 "data_offset": 2048, 00:08:34.236 "data_size": 63488 00:08:34.236 } 00:08:34.236 ] 00:08:34.236 }' 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.236 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.497 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:34.497 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:34.497 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.497 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.497 [2024-12-06 09:45:59.753961] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:34.497 [2024-12-06 09:45:59.754082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.497 [2024-12-06 09:45:59.754129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:34.497 [2024-12-06 09:45:59.754173] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.497 [2024-12-06 09:45:59.754649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.497 [2024-12-06 09:45:59.754708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:34.497 [2024-12-06 09:45:59.754819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:34.497 [2024-12-06 09:45:59.754876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:34.497 pt2 00:08:34.497 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.497 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:34.497 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.497 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.497 [2024-12-06 09:45:59.765923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:34.756 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.756 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:34.756 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.756 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.756 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.756 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.756 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.756 09:45:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.756 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.756 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.756 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.757 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.757 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.757 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.757 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.757 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.757 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.757 "name": "raid_bdev1", 00:08:34.757 "uuid": "cfa7439e-d53d-4c1c-9745-f4bba097088e", 00:08:34.757 "strip_size_kb": 64, 00:08:34.757 "state": "configuring", 00:08:34.757 "raid_level": "raid0", 00:08:34.757 "superblock": true, 00:08:34.757 "num_base_bdevs": 3, 00:08:34.757 "num_base_bdevs_discovered": 1, 00:08:34.757 "num_base_bdevs_operational": 3, 00:08:34.757 "base_bdevs_list": [ 00:08:34.757 { 00:08:34.757 "name": "pt1", 00:08:34.757 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.757 "is_configured": true, 00:08:34.757 "data_offset": 2048, 00:08:34.757 "data_size": 63488 00:08:34.757 }, 00:08:34.757 { 00:08:34.757 "name": null, 00:08:34.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.757 "is_configured": false, 00:08:34.757 "data_offset": 0, 00:08:34.757 "data_size": 63488 00:08:34.757 }, 00:08:34.757 { 00:08:34.757 "name": null, 00:08:34.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:34.757 
"is_configured": false, 00:08:34.757 "data_offset": 2048, 00:08:34.757 "data_size": 63488 00:08:34.757 } 00:08:34.757 ] 00:08:34.757 }' 00:08:34.757 09:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.757 09:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.017 [2024-12-06 09:46:00.181224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:35.017 [2024-12-06 09:46:00.181363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.017 [2024-12-06 09:46:00.181401] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:35.017 [2024-12-06 09:46:00.181431] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.017 [2024-12-06 09:46:00.181981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.017 [2024-12-06 09:46:00.182052] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:35.017 [2024-12-06 09:46:00.182185] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:35.017 [2024-12-06 09:46:00.182245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:35.017 pt2 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.017 [2024-12-06 09:46:00.193209] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:35.017 [2024-12-06 09:46:00.193312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.017 [2024-12-06 09:46:00.193344] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:35.017 [2024-12-06 09:46:00.193373] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.017 [2024-12-06 09:46:00.193867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.017 [2024-12-06 09:46:00.193931] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:35.017 [2024-12-06 09:46:00.194043] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:35.017 [2024-12-06 09:46:00.194096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:35.017 [2024-12-06 09:46:00.194266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:35.017 [2024-12-06 09:46:00.194307] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:35.017 [2024-12-06 09:46:00.194585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:35.017 [2024-12-06 09:46:00.194781] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:35.017 [2024-12-06 09:46:00.194819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:35.017 [2024-12-06 09:46:00.195012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.017 pt3 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.017 "name": "raid_bdev1", 00:08:35.017 "uuid": "cfa7439e-d53d-4c1c-9745-f4bba097088e", 00:08:35.017 "strip_size_kb": 64, 00:08:35.017 "state": "online", 00:08:35.017 "raid_level": "raid0", 00:08:35.017 "superblock": true, 00:08:35.017 "num_base_bdevs": 3, 00:08:35.017 "num_base_bdevs_discovered": 3, 00:08:35.017 "num_base_bdevs_operational": 3, 00:08:35.017 "base_bdevs_list": [ 00:08:35.017 { 00:08:35.017 "name": "pt1", 00:08:35.017 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.017 "is_configured": true, 00:08:35.017 "data_offset": 2048, 00:08:35.017 "data_size": 63488 00:08:35.017 }, 00:08:35.017 { 00:08:35.017 "name": "pt2", 00:08:35.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.017 "is_configured": true, 00:08:35.017 "data_offset": 2048, 00:08:35.017 "data_size": 63488 00:08:35.017 }, 00:08:35.017 { 00:08:35.017 "name": "pt3", 00:08:35.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:35.017 "is_configured": true, 00:08:35.017 "data_offset": 2048, 00:08:35.017 "data_size": 63488 00:08:35.017 } 00:08:35.017 ] 00:08:35.017 }' 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.017 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:35.587 09:46:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.587 [2024-12-06 09:46:00.688701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.587 "name": "raid_bdev1", 00:08:35.587 "aliases": [ 00:08:35.587 "cfa7439e-d53d-4c1c-9745-f4bba097088e" 00:08:35.587 ], 00:08:35.587 "product_name": "Raid Volume", 00:08:35.587 "block_size": 512, 00:08:35.587 "num_blocks": 190464, 00:08:35.587 "uuid": "cfa7439e-d53d-4c1c-9745-f4bba097088e", 00:08:35.587 "assigned_rate_limits": { 00:08:35.587 "rw_ios_per_sec": 0, 00:08:35.587 "rw_mbytes_per_sec": 0, 00:08:35.587 "r_mbytes_per_sec": 0, 00:08:35.587 "w_mbytes_per_sec": 0 00:08:35.587 }, 00:08:35.587 "claimed": false, 00:08:35.587 "zoned": false, 00:08:35.587 "supported_io_types": { 00:08:35.587 "read": true, 00:08:35.587 "write": true, 00:08:35.587 "unmap": true, 00:08:35.587 "flush": true, 00:08:35.587 "reset": true, 00:08:35.587 "nvme_admin": false, 00:08:35.587 "nvme_io": false, 00:08:35.587 "nvme_io_md": false, 00:08:35.587 
"write_zeroes": true, 00:08:35.587 "zcopy": false, 00:08:35.587 "get_zone_info": false, 00:08:35.587 "zone_management": false, 00:08:35.587 "zone_append": false, 00:08:35.587 "compare": false, 00:08:35.587 "compare_and_write": false, 00:08:35.587 "abort": false, 00:08:35.587 "seek_hole": false, 00:08:35.587 "seek_data": false, 00:08:35.587 "copy": false, 00:08:35.587 "nvme_iov_md": false 00:08:35.587 }, 00:08:35.587 "memory_domains": [ 00:08:35.587 { 00:08:35.587 "dma_device_id": "system", 00:08:35.587 "dma_device_type": 1 00:08:35.587 }, 00:08:35.587 { 00:08:35.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.587 "dma_device_type": 2 00:08:35.587 }, 00:08:35.587 { 00:08:35.587 "dma_device_id": "system", 00:08:35.587 "dma_device_type": 1 00:08:35.587 }, 00:08:35.587 { 00:08:35.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.587 "dma_device_type": 2 00:08:35.587 }, 00:08:35.587 { 00:08:35.587 "dma_device_id": "system", 00:08:35.587 "dma_device_type": 1 00:08:35.587 }, 00:08:35.587 { 00:08:35.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.587 "dma_device_type": 2 00:08:35.587 } 00:08:35.587 ], 00:08:35.587 "driver_specific": { 00:08:35.587 "raid": { 00:08:35.587 "uuid": "cfa7439e-d53d-4c1c-9745-f4bba097088e", 00:08:35.587 "strip_size_kb": 64, 00:08:35.587 "state": "online", 00:08:35.587 "raid_level": "raid0", 00:08:35.587 "superblock": true, 00:08:35.587 "num_base_bdevs": 3, 00:08:35.587 "num_base_bdevs_discovered": 3, 00:08:35.587 "num_base_bdevs_operational": 3, 00:08:35.587 "base_bdevs_list": [ 00:08:35.587 { 00:08:35.587 "name": "pt1", 00:08:35.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.587 "is_configured": true, 00:08:35.587 "data_offset": 2048, 00:08:35.587 "data_size": 63488 00:08:35.587 }, 00:08:35.587 { 00:08:35.587 "name": "pt2", 00:08:35.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.587 "is_configured": true, 00:08:35.587 "data_offset": 2048, 00:08:35.587 "data_size": 63488 00:08:35.587 }, 00:08:35.587 
{ 00:08:35.587 "name": "pt3", 00:08:35.587 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:35.587 "is_configured": true, 00:08:35.587 "data_offset": 2048, 00:08:35.587 "data_size": 63488 00:08:35.587 } 00:08:35.587 ] 00:08:35.587 } 00:08:35.587 } 00:08:35.587 }' 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:35.587 pt2 00:08:35.587 pt3' 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.587 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.871 [2024-12-06 
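The trace above loops over the base bdev names extracted by jq and compares each bdev's geometry string (`block_size md_size md_interleave dif_type`, joined by spaces) against the raid bdev's. A minimal stand-in sketch of that comparison pattern, with the `rpc_cmd | jq` lookups replaced by the literal values seen in the log (block size 512, empty metadata fields, hence the trailing spaces):

```shell
# Sketch of the bdev_raid.sh geometry check: every base bdev must report the
# same "block_size md_size md_interleave dif_type" string as the raid bdev.
cmp_raid_bdev='512   '            # stand-in for: jq over bdev_get_bdevs output
base_bdev_names='pt1 pt2 pt3'     # names selected via .is_configured == true

status=ok
for name in $base_bdev_names; do
    cmp_base_bdev='512   '        # stand-in for: rpc_cmd bdev_get_bdevs -b $name | jq ...
    [[ $cmp_base_bdev == "$cmp_raid_bdev" ]] || status=mismatch
done
echo "$status"
```

In the real script a mismatch fails the test; here the loop only records it.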
09:46:00.972210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.871 09:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.871 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cfa7439e-d53d-4c1c-9745-f4bba097088e '!=' cfa7439e-d53d-4c1c-9745-f4bba097088e ']' 00:08:35.871 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:35.871 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.871 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.871 09:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65026 00:08:35.872 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65026 ']' 00:08:35.872 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65026 00:08:35.872 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:35.872 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.872 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65026 00:08:35.872 killing process with pid 65026 00:08:35.872 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.872 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.872 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65026' 00:08:35.872 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65026 00:08:35.872 [2024-12-06 09:46:01.055885] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:35.872 [2024-12-06 09:46:01.055990] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.872 [2024-12-06 09:46:01.056056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.872 09:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65026 00:08:35.872 [2024-12-06 09:46:01.056070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:36.136 [2024-12-06 09:46:01.378461] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.512 09:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:37.512 00:08:37.512 real 0m5.365s 00:08:37.512 user 0m7.671s 00:08:37.512 sys 0m0.854s 00:08:37.512 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.512 09:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.512 ************************************ 00:08:37.512 END TEST raid_superblock_test 00:08:37.512 ************************************ 00:08:37.512 09:46:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:37.512 09:46:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:37.512 09:46:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.512 09:46:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.512 ************************************ 00:08:37.512 START TEST raid_read_error_test 00:08:37.512 ************************************ 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:37.512 09:46:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.P8hzdbe4ZY 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65279 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65279 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65279 ']' 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.512 09:46:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.512 [2024-12-06 09:46:02.768481] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
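`waitforlisten 65279` above blocks until bdevperf is listening on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A simplified sketch of that retry shape (the real helper also sleeps between attempts and probes the socket with an RPC; the socket path here is illustrative and never appears, so the loop exhausts its retries):

```shell
# waitforlisten-style poll: spin until a UNIX socket exists or retries run out.
rpc_addr=/tmp/spdk_demo_does_not_exist.sock   # illustrative; real path is /var/tmp/spdk.sock
max_retries=100

i=0
while [ ! -S "$rpc_addr" ] && [ "$i" -lt "$max_retries" ]; do
    i=$((i + 1))                              # real helper sleeps here between probes
done
echo "retries used: $i"
```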
00:08:37.512 [2024-12-06 09:46:02.768721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65279 ] 00:08:37.772 [2024-12-06 09:46:02.941344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.031 [2024-12-06 09:46:03.061981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.031 [2024-12-06 09:46:03.288868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.031 [2024-12-06 09:46:03.288976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 BaseBdev1_malloc 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 true 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 [2024-12-06 09:46:03.696303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:38.599 [2024-12-06 09:46:03.696440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.599 [2024-12-06 09:46:03.696505] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:38.599 [2024-12-06 09:46:03.696549] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.599 [2024-12-06 09:46:03.699039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.599 [2024-12-06 09:46:03.699133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:38.599 BaseBdev1 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 BaseBdev2_malloc 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 true 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.599 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 [2024-12-06 09:46:03.767409] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:38.599 [2024-12-06 09:46:03.767527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.599 [2024-12-06 09:46:03.767570] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:38.600 [2024-12-06 09:46:03.767614] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.600 [2024-12-06 09:46:03.770047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.600 [2024-12-06 09:46:03.770136] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:38.600 BaseBdev2 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.600 BaseBdev3_malloc 00:08:38.600 09:46:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.600 true 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.600 [2024-12-06 09:46:03.849005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:38.600 [2024-12-06 09:46:03.849134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.600 [2024-12-06 09:46:03.849193] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:38.600 [2024-12-06 09:46:03.849234] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.600 [2024-12-06 09:46:03.851797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.600 [2024-12-06 09:46:03.851884] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:38.600 BaseBdev3 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- 
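The preceding RPCs build a three-layer stack per base device: a malloc bdev, an error-injection bdev on top of it, and a passthru bdev (`BaseBdevN`) that the raid will claim. The loop below replays exactly the RPC sequence visible in the trace, with `rpc_cmd` stubbed to print instead of talking to SPDK:

```shell
# Replay of the base-bdev stack from the trace: malloc -> error -> passthru,
# once per BaseBdev. rpc_cmd is a stub so this runs without an SPDK target.
rpc_cmd() { echo "rpc: $*"; }

base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
cmds=$(for bdev in "${base_bdevs[@]}"; do
    rpc_cmd bdev_malloc_create 32 512 -b "${bdev}_malloc"
    rpc_cmd bdev_error_create "${bdev}_malloc"
    rpc_cmd bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
done)
echo "$cmds"
```

The error bdev wraps `${bdev}_malloc` as `EE_${bdev}_malloc`, which is why the later injection call targets `EE_BaseBdev1_malloc`.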
common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.600 [2024-12-06 09:46:03.861063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.600 [2024-12-06 09:46:03.863104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.600 [2024-12-06 09:46:03.863204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.600 [2024-12-06 09:46:03.863434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:38.600 [2024-12-06 09:46:03.863450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:38.600 [2024-12-06 09:46:03.863757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:38.600 [2024-12-06 09:46:03.863943] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:38.600 [2024-12-06 09:46:03.863959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:38.600 [2024-12-06 09:46:03.864135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.600 09:46:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.600 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.860 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.860 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.860 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.860 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.860 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.860 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.860 "name": "raid_bdev1", 00:08:38.860 "uuid": "7ea1fbf3-2960-451e-b408-fabd05a756df", 00:08:38.860 "strip_size_kb": 64, 00:08:38.860 "state": "online", 00:08:38.860 "raid_level": "raid0", 00:08:38.860 "superblock": true, 00:08:38.860 "num_base_bdevs": 3, 00:08:38.860 "num_base_bdevs_discovered": 3, 00:08:38.860 "num_base_bdevs_operational": 3, 00:08:38.860 "base_bdevs_list": [ 00:08:38.860 { 00:08:38.860 "name": "BaseBdev1", 00:08:38.860 "uuid": "bdc31e0c-5bef-514d-86f4-44b985b134ab", 00:08:38.860 "is_configured": true, 00:08:38.860 "data_offset": 2048, 00:08:38.860 "data_size": 63488 00:08:38.860 }, 00:08:38.860 { 00:08:38.860 "name": "BaseBdev2", 00:08:38.860 "uuid": "48e4f03d-1b20-599f-920c-d97cd21a21d1", 00:08:38.860 "is_configured": true, 00:08:38.860 "data_offset": 2048, 00:08:38.860 "data_size": 63488 
00:08:38.860 }, 00:08:38.860 { 00:08:38.860 "name": "BaseBdev3", 00:08:38.860 "uuid": "9222ef67-f7d2-5396-978a-3cc77fade128", 00:08:38.860 "is_configured": true, 00:08:38.860 "data_offset": 2048, 00:08:38.860 "data_size": 63488 00:08:38.860 } 00:08:38.860 ] 00:08:38.860 }' 00:08:38.860 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.860 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.119 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:39.119 09:46:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:39.378 [2024-12-06 09:46:04.405583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
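`verify_raid_bdev_state` above pulls the raid bdev's JSON via `bdev_raid_get_bdevs all | jq 'select(.name == "raid_bdev1")'` and checks fields against expected values. A minimal sketch of that check, with the jq extractions replaced by the values from the dumped JSON (state `online`, three discovered base bdevs):

```shell
# Sketch of the verify_raid_bdev_state field checks; the two "stand-in"
# assignments replace jq queries over the bdev_raid_get_bdevs output.
expected_state=online
expected_num=3

state=online                    # stand-in for: jq -r '.state' <<<"$raid_bdev_info"
num_base_bdevs_discovered=3     # stand-in for: jq -r '.num_base_bdevs_discovered' ...

verdict=state-mismatch
[ "$state" = "$expected_state" ] &&
    [ "$num_base_bdevs_discovered" -eq "$expected_num" ] &&
    verdict=verified
echo "$verdict"
```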
00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.318 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.318 "name": "raid_bdev1", 00:08:40.318 "uuid": "7ea1fbf3-2960-451e-b408-fabd05a756df", 00:08:40.318 "strip_size_kb": 64, 00:08:40.318 "state": "online", 00:08:40.318 "raid_level": "raid0", 00:08:40.318 "superblock": true, 00:08:40.318 "num_base_bdevs": 3, 00:08:40.318 "num_base_bdevs_discovered": 3, 00:08:40.318 "num_base_bdevs_operational": 3, 00:08:40.318 "base_bdevs_list": [ 00:08:40.319 { 00:08:40.319 "name": "BaseBdev1", 00:08:40.319 "uuid": "bdc31e0c-5bef-514d-86f4-44b985b134ab", 00:08:40.319 "is_configured": true, 00:08:40.319 "data_offset": 2048, 00:08:40.319 "data_size": 63488 
00:08:40.319 }, 00:08:40.319 { 00:08:40.319 "name": "BaseBdev2", 00:08:40.319 "uuid": "48e4f03d-1b20-599f-920c-d97cd21a21d1", 00:08:40.319 "is_configured": true, 00:08:40.319 "data_offset": 2048, 00:08:40.319 "data_size": 63488 00:08:40.319 }, 00:08:40.319 { 00:08:40.319 "name": "BaseBdev3", 00:08:40.319 "uuid": "9222ef67-f7d2-5396-978a-3cc77fade128", 00:08:40.319 "is_configured": true, 00:08:40.319 "data_offset": 2048, 00:08:40.319 "data_size": 63488 00:08:40.319 } 00:08:40.319 ] 00:08:40.319 }' 00:08:40.319 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.319 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.577 [2024-12-06 09:46:05.765406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:40.577 [2024-12-06 09:46:05.765489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.577 [2024-12-06 09:46:05.768377] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.577 [2024-12-06 09:46:05.768464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.577 [2024-12-06 09:46:05.768523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.577 [2024-12-06 09:46:05.768563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.577 { 00:08:40.577 "results": [ 00:08:40.577 { 00:08:40.577 "job": "raid_bdev1", 
00:08:40.577 "core_mask": "0x1", 00:08:40.577 "workload": "randrw", 00:08:40.577 "percentage": 50, 00:08:40.577 "status": "finished", 00:08:40.577 "queue_depth": 1, 00:08:40.577 "io_size": 131072, 00:08:40.577 "runtime": 1.360677, 00:08:40.577 "iops": 14452.364521484526, 00:08:40.577 "mibps": 1806.5455651855657, 00:08:40.577 "io_failed": 1, 00:08:40.577 "io_timeout": 0, 00:08:40.577 "avg_latency_us": 95.94582719183288, 00:08:40.577 "min_latency_us": 26.047161572052403, 00:08:40.577 "max_latency_us": 1452.380786026201 00:08:40.577 } 00:08:40.577 ], 00:08:40.577 "core_count": 1 00:08:40.577 } 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65279 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65279 ']' 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65279 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65279 00:08:40.577 killing process with pid 65279 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65279' 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65279 00:08:40.577 09:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65279 00:08:40.577 [2024-12-06 09:46:05.806788] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.836 [2024-12-06 
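After bdevperf exits, the test extracts the failures-per-second figure from its log with the `grep -v Job | grep raid_bdev1 | awk '{print $6}'` pipeline seen just below, yielding `0.73`. A runnable sketch of that pipeline over a fabricated sample line (the real bdevperf table layout is not shown in this log, so the column positions here are an assumption arranged so field 6 holds the fail rate):

```shell
# Illustrative bdevperf-log excerpt; only the grep/awk pipeline mirrors the test.
sample='Job: raid_bdev1 (core 0)
raid_bdev1 14452.36 1806.55 1 0 0.73'

fail_per_s=$(printf '%s\n' "$sample" | grep -v Job | grep raid_bdev1 | awk '{print $6}')
echo "$fail_per_s"
[ "$fail_per_s" != 0.00 ] && echo "errors were injected"
```

The test then asserts `fail_per_s != 0.00`, i.e. the injected read error actually surfaced as a failed I/O.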
09:46:06.031181] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.219 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.P8hzdbe4ZY 00:08:42.219 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:42.219 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:42.219 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:42.219 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:42.219 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.219 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:42.219 09:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:42.219 00:08:42.219 real 0m4.720s 00:08:42.219 user 0m5.599s 00:08:42.219 sys 0m0.516s 00:08:42.219 ************************************ 00:08:42.219 END TEST raid_read_error_test 00:08:42.219 ************************************ 00:08:42.219 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.219 09:46:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.219 09:46:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:42.219 09:46:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:42.219 09:46:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.219 09:46:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.219 ************************************ 00:08:42.219 START TEST raid_write_error_test 00:08:42.219 ************************************ 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:42.219 09:46:07 
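The `has_redundancy raid0` call traced above hits a `case $1 in` and returns 1, which is why a nonzero failure rate is acceptable for raid0. A sketch of that helper pattern; the log only shows raid0 falling through to `return 1`, so which levels match the redundant branch is an assumption here:

```shell
# has_redundancy-style helper: redundant raid levels return 0, raid0 returns 1.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;   # assumed redundant levels; not shown in this log
        *) return 1 ;;
    esac
}

red=$(has_redundancy raid0 && echo yes || echo no)
echo "raid0 redundancy: $red"
```

With no redundancy, a failed read on one base bdev cannot be reconstructed, so the test expects the I/O failure rather than a degraded-but-successful read.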
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:42.219 09:46:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YQoDVhGUiF 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65425 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65425 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65425 ']' 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
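For context on the `fail_per_s` values this log reports (0.73 for the read run, 0.79 for the write run below): `bdev_raid.sh@845` greps the fail/s column out of bdevperf's table output, but numerically the figure is just failed I/Os divided by runtime, both of which appear in the JSON results block bdevperf prints. The sketch below reproduces the read run's 0.73 from the values copied out of the log above; it is a hedged illustration of the arithmetic, not the script's actual grep/awk extraction.

```python
import json

# Trimmed copy of the bdevperf result JSON printed earlier in this log
# (read-error run): 1 injected failure over a 1.360677 s run.
raw = '''{
  "results": [
    { "job": "raid_bdev1", "io_failed": 1, "io_timeout": 0, "runtime": 1.360677 }
  ]
}'''

job = json.loads(raw)["results"][0]

# fail/s as reported by bdevperf: failed I/Os divided by runtime in seconds.
fail_per_s = job["io_failed"] / job["runtime"]
print(f"{fail_per_s:.2f}")  # prints 0.73, matching the log's fail_per_s=0.73
```

The subsequent `[[ 0.73 != \0\.\0\0 ]]` check then asserts that the injected error actually surfaced: `has_redundancy raid0` returns 1, so for raid0 a nonzero failure rate is the expected outcome.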
00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.219 09:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.488 [2024-12-06 09:46:07.560190] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:08:42.488 [2024-12-06 09:46:07.560333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65425 ] 00:08:42.488 [2024-12-06 09:46:07.730695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.746 [2024-12-06 09:46:07.872410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.004 [2024-12-06 09:46:08.089553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.004 [2024-12-06 09:46:08.089627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.263 BaseBdev1_malloc 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.263 true 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.263 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.263 [2024-12-06 09:46:08.533136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:43.263 [2024-12-06 09:46:08.533283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.263 [2024-12-06 09:46:08.533338] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:43.263 [2024-12-06 09:46:08.533381] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.522 [2024-12-06 09:46:08.535974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.522 [2024-12-06 09:46:08.536072] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:43.522 BaseBdev1 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.522 BaseBdev2_malloc 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.522 true 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.522 [2024-12-06 09:46:08.595863] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:43.522 [2024-12-06 09:46:08.595974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.522 [2024-12-06 09:46:08.596025] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:43.522 [2024-12-06 09:46:08.596078] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.522 [2024-12-06 09:46:08.598629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.522 [2024-12-06 09:46:08.598720] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:43.522 BaseBdev2 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:43.522 09:46:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.522 BaseBdev3_malloc 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.522 true 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.522 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.523 [2024-12-06 09:46:08.676222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:43.523 [2024-12-06 09:46:08.676334] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.523 [2024-12-06 09:46:08.676386] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:43.523 [2024-12-06 09:46:08.676429] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.523 [2024-12-06 09:46:08.678830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.523 [2024-12-06 09:46:08.678906] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:43.523 BaseBdev3 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.523 [2024-12-06 09:46:08.688278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.523 [2024-12-06 09:46:08.690379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.523 [2024-12-06 09:46:08.690528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.523 [2024-12-06 09:46:08.690838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:43.523 [2024-12-06 09:46:08.690901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:43.523 [2024-12-06 09:46:08.691257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:43.523 [2024-12-06 09:46:08.691506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:43.523 [2024-12-06 09:46:08.691563] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:43.523 [2024-12-06 09:46:08.691812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.523 "name": "raid_bdev1", 00:08:43.523 "uuid": "3a392ae4-e53a-4dc5-969a-ab8d37aaae7a", 00:08:43.523 "strip_size_kb": 64, 00:08:43.523 "state": "online", 00:08:43.523 "raid_level": "raid0", 00:08:43.523 "superblock": true, 00:08:43.523 "num_base_bdevs": 3, 00:08:43.523 "num_base_bdevs_discovered": 3, 00:08:43.523 "num_base_bdevs_operational": 3, 00:08:43.523 "base_bdevs_list": [ 00:08:43.523 { 00:08:43.523 "name": "BaseBdev1", 
00:08:43.523 "uuid": "33fc7f83-08cb-58b2-b100-ba8905a88362", 00:08:43.523 "is_configured": true, 00:08:43.523 "data_offset": 2048, 00:08:43.523 "data_size": 63488 00:08:43.523 }, 00:08:43.523 { 00:08:43.523 "name": "BaseBdev2", 00:08:43.523 "uuid": "5ef53d25-72a7-58cf-bd36-6d06f7ec54ab", 00:08:43.523 "is_configured": true, 00:08:43.523 "data_offset": 2048, 00:08:43.523 "data_size": 63488 00:08:43.523 }, 00:08:43.523 { 00:08:43.523 "name": "BaseBdev3", 00:08:43.523 "uuid": "df19062e-3f4b-586b-8d8a-6e985df386b8", 00:08:43.523 "is_configured": true, 00:08:43.523 "data_offset": 2048, 00:08:43.523 "data_size": 63488 00:08:43.523 } 00:08:43.523 ] 00:08:43.523 }' 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.523 09:46:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.089 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:44.089 09:46:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:44.089 [2024-12-06 09:46:09.216902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.019 "name": "raid_bdev1", 00:08:45.019 "uuid": "3a392ae4-e53a-4dc5-969a-ab8d37aaae7a", 00:08:45.019 "strip_size_kb": 64, 00:08:45.019 "state": "online", 00:08:45.019 
"raid_level": "raid0", 00:08:45.019 "superblock": true, 00:08:45.019 "num_base_bdevs": 3, 00:08:45.019 "num_base_bdevs_discovered": 3, 00:08:45.019 "num_base_bdevs_operational": 3, 00:08:45.019 "base_bdevs_list": [ 00:08:45.019 { 00:08:45.019 "name": "BaseBdev1", 00:08:45.019 "uuid": "33fc7f83-08cb-58b2-b100-ba8905a88362", 00:08:45.019 "is_configured": true, 00:08:45.019 "data_offset": 2048, 00:08:45.019 "data_size": 63488 00:08:45.019 }, 00:08:45.019 { 00:08:45.019 "name": "BaseBdev2", 00:08:45.019 "uuid": "5ef53d25-72a7-58cf-bd36-6d06f7ec54ab", 00:08:45.019 "is_configured": true, 00:08:45.019 "data_offset": 2048, 00:08:45.019 "data_size": 63488 00:08:45.019 }, 00:08:45.019 { 00:08:45.019 "name": "BaseBdev3", 00:08:45.019 "uuid": "df19062e-3f4b-586b-8d8a-6e985df386b8", 00:08:45.019 "is_configured": true, 00:08:45.019 "data_offset": 2048, 00:08:45.019 "data_size": 63488 00:08:45.019 } 00:08:45.019 ] 00:08:45.019 }' 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.019 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.276 [2024-12-06 09:46:10.478306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:45.276 [2024-12-06 09:46:10.478340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.276 { 00:08:45.276 "results": [ 00:08:45.276 { 00:08:45.276 "job": "raid_bdev1", 00:08:45.276 "core_mask": "0x1", 00:08:45.276 "workload": "randrw", 00:08:45.276 "percentage": 50, 00:08:45.276 "status": "finished", 00:08:45.276 "queue_depth": 1, 00:08:45.276 "io_size": 131072, 
00:08:45.276 "runtime": 1.261604, 00:08:45.276 "iops": 12248.692933757344, 00:08:45.276 "mibps": 1531.086616719668, 00:08:45.276 "io_failed": 1, 00:08:45.276 "io_timeout": 0, 00:08:45.276 "avg_latency_us": 112.98771268217892, 00:08:45.276 "min_latency_us": 28.05938864628821, 00:08:45.276 "max_latency_us": 1810.1100436681222 00:08:45.276 } 00:08:45.276 ], 00:08:45.276 "core_count": 1 00:08:45.276 } 00:08:45.276 [2024-12-06 09:46:10.481640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.276 [2024-12-06 09:46:10.481698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.276 [2024-12-06 09:46:10.481747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.276 [2024-12-06 09:46:10.481760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65425 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65425 ']' 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65425 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65425 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65425' 00:08:45.276 killing process with pid 65425 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65425 00:08:45.276 09:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65425 00:08:45.276 [2024-12-06 09:46:10.514595] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.532 [2024-12-06 09:46:10.753950] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.903 09:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:46.903 09:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YQoDVhGUiF 00:08:46.903 09:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:46.903 09:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:08:46.903 09:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:46.903 09:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.903 09:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:46.903 09:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:08:46.903 00:08:46.903 real 0m4.565s 00:08:46.903 user 0m5.379s 00:08:46.903 sys 0m0.523s 00:08:46.903 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.903 09:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.903 ************************************ 00:08:46.903 END TEST raid_write_error_test 00:08:46.903 ************************************ 00:08:46.903 09:46:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:46.903 09:46:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:46.903 09:46:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:46.903 09:46:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.903 09:46:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.903 ************************************ 00:08:46.903 START TEST raid_state_function_test 00:08:46.903 ************************************ 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:46.903 09:46:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65568 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65568' 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:46.903 Process raid pid: 65568 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65568 00:08:46.903 09:46:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65568 ']' 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.903 09:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.162 [2024-12-06 09:46:12.185034] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:08:47.162 [2024-12-06 09:46:12.185252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.162 [2024-12-06 09:46:12.341181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.421 [2024-12-06 09:46:12.478321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.680 [2024-12-06 09:46:12.761917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.680 [2024-12-06 09:46:12.762075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.940 [2024-12-06 09:46:13.034726] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.940 [2024-12-06 09:46:13.034784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.940 [2024-12-06 09:46:13.034795] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.940 [2024-12-06 09:46:13.034806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.940 [2024-12-06 09:46:13.034813] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.940 [2024-12-06 09:46:13.034823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.940 "name": "Existed_Raid", 00:08:47.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.940 "strip_size_kb": 64, 00:08:47.940 "state": "configuring", 00:08:47.940 "raid_level": "concat", 00:08:47.940 "superblock": false, 00:08:47.940 "num_base_bdevs": 3, 00:08:47.940 "num_base_bdevs_discovered": 0, 00:08:47.940 "num_base_bdevs_operational": 3, 00:08:47.940 "base_bdevs_list": [ 00:08:47.940 { 00:08:47.940 "name": "BaseBdev1", 00:08:47.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.940 "is_configured": false, 00:08:47.940 "data_offset": 0, 00:08:47.940 "data_size": 0 00:08:47.940 }, 00:08:47.940 { 00:08:47.940 "name": "BaseBdev2", 00:08:47.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.940 "is_configured": false, 00:08:47.940 "data_offset": 0, 00:08:47.940 "data_size": 0 00:08:47.940 }, 00:08:47.940 { 00:08:47.940 "name": "BaseBdev3", 00:08:47.940 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:47.940 "is_configured": false, 00:08:47.940 "data_offset": 0, 00:08:47.940 "data_size": 0 00:08:47.940 } 00:08:47.940 ] 00:08:47.940 }' 00:08:47.940 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.941 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.282 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.282 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.282 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.282 [2024-12-06 09:46:13.513856] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.282 [2024-12-06 09:46:13.513942] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:48.282 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.282 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.282 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.282 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.282 [2024-12-06 09:46:13.521831] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.282 [2024-12-06 09:46:13.521926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.282 [2024-12-06 09:46:13.521955] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.282 [2024-12-06 09:46:13.521979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:48.282 [2024-12-06 09:46:13.521997] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.282 [2024-12-06 09:46:13.522019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.282 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.282 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:48.282 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.282 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.566 [2024-12-06 09:46:13.566384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.566 BaseBdev1 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.566 [ 00:08:48.566 { 00:08:48.566 "name": "BaseBdev1", 00:08:48.566 "aliases": [ 00:08:48.566 "16f0a146-fdd0-4686-9e8a-3f8ef9750c7c" 00:08:48.566 ], 00:08:48.566 "product_name": "Malloc disk", 00:08:48.566 "block_size": 512, 00:08:48.566 "num_blocks": 65536, 00:08:48.566 "uuid": "16f0a146-fdd0-4686-9e8a-3f8ef9750c7c", 00:08:48.566 "assigned_rate_limits": { 00:08:48.566 "rw_ios_per_sec": 0, 00:08:48.566 "rw_mbytes_per_sec": 0, 00:08:48.566 "r_mbytes_per_sec": 0, 00:08:48.566 "w_mbytes_per_sec": 0 00:08:48.566 }, 00:08:48.566 "claimed": true, 00:08:48.566 "claim_type": "exclusive_write", 00:08:48.566 "zoned": false, 00:08:48.566 "supported_io_types": { 00:08:48.566 "read": true, 00:08:48.566 "write": true, 00:08:48.566 "unmap": true, 00:08:48.566 "flush": true, 00:08:48.566 "reset": true, 00:08:48.566 "nvme_admin": false, 00:08:48.566 "nvme_io": false, 00:08:48.566 "nvme_io_md": false, 00:08:48.566 "write_zeroes": true, 00:08:48.566 "zcopy": true, 00:08:48.566 "get_zone_info": false, 00:08:48.566 "zone_management": false, 00:08:48.566 "zone_append": false, 00:08:48.566 "compare": false, 00:08:48.566 "compare_and_write": false, 00:08:48.566 "abort": true, 00:08:48.566 "seek_hole": false, 00:08:48.566 "seek_data": false, 00:08:48.566 "copy": true, 00:08:48.566 "nvme_iov_md": false 00:08:48.566 }, 00:08:48.566 "memory_domains": [ 00:08:48.566 { 00:08:48.566 "dma_device_id": "system", 00:08:48.566 "dma_device_type": 1 00:08:48.566 }, 00:08:48.566 { 00:08:48.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:48.566 "dma_device_type": 2 00:08:48.566 } 00:08:48.566 ], 00:08:48.566 "driver_specific": {} 00:08:48.566 } 00:08:48.566 ] 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.566 09:46:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.566 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.566 "name": "Existed_Raid", 00:08:48.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.566 "strip_size_kb": 64, 00:08:48.566 "state": "configuring", 00:08:48.566 "raid_level": "concat", 00:08:48.566 "superblock": false, 00:08:48.566 "num_base_bdevs": 3, 00:08:48.566 "num_base_bdevs_discovered": 1, 00:08:48.566 "num_base_bdevs_operational": 3, 00:08:48.566 "base_bdevs_list": [ 00:08:48.566 { 00:08:48.566 "name": "BaseBdev1", 00:08:48.566 "uuid": "16f0a146-fdd0-4686-9e8a-3f8ef9750c7c", 00:08:48.566 "is_configured": true, 00:08:48.566 "data_offset": 0, 00:08:48.566 "data_size": 65536 00:08:48.566 }, 00:08:48.566 { 00:08:48.566 "name": "BaseBdev2", 00:08:48.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.566 "is_configured": false, 00:08:48.566 "data_offset": 0, 00:08:48.566 "data_size": 0 00:08:48.566 }, 00:08:48.566 { 00:08:48.566 "name": "BaseBdev3", 00:08:48.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.566 "is_configured": false, 00:08:48.566 "data_offset": 0, 00:08:48.567 "data_size": 0 00:08:48.567 } 00:08:48.567 ] 00:08:48.567 }' 00:08:48.567 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.567 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.825 09:46:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.825 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.825 09:46:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.825 [2024-12-06 09:46:14.001769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.825 [2024-12-06 09:46:14.001911] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.825 [2024-12-06 09:46:14.009817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.825 [2024-12-06 09:46:14.012477] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.825 [2024-12-06 09:46:14.012585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.825 [2024-12-06 09:46:14.012643] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.825 [2024-12-06 09:46:14.012697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.825 09:46:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.825 "name": "Existed_Raid", 00:08:48.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.825 "strip_size_kb": 64, 00:08:48.825 "state": "configuring", 00:08:48.825 "raid_level": "concat", 00:08:48.825 "superblock": false, 00:08:48.825 "num_base_bdevs": 3, 00:08:48.825 "num_base_bdevs_discovered": 1, 00:08:48.825 "num_base_bdevs_operational": 3, 00:08:48.825 "base_bdevs_list": [ 00:08:48.825 { 00:08:48.825 "name": "BaseBdev1", 00:08:48.825 "uuid": "16f0a146-fdd0-4686-9e8a-3f8ef9750c7c", 00:08:48.825 "is_configured": true, 00:08:48.825 "data_offset": 
0, 00:08:48.825 "data_size": 65536 00:08:48.825 }, 00:08:48.825 { 00:08:48.825 "name": "BaseBdev2", 00:08:48.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.825 "is_configured": false, 00:08:48.825 "data_offset": 0, 00:08:48.825 "data_size": 0 00:08:48.825 }, 00:08:48.825 { 00:08:48.825 "name": "BaseBdev3", 00:08:48.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.825 "is_configured": false, 00:08:48.825 "data_offset": 0, 00:08:48.825 "data_size": 0 00:08:48.825 } 00:08:48.825 ] 00:08:48.825 }' 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.825 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.391 [2024-12-06 09:46:14.437632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:49.391 BaseBdev2 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.391 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.391 [ 00:08:49.391 { 00:08:49.391 "name": "BaseBdev2", 00:08:49.391 "aliases": [ 00:08:49.391 "4ad61073-e6bf-478d-ade9-240afb5d4095" 00:08:49.391 ], 00:08:49.391 "product_name": "Malloc disk", 00:08:49.392 "block_size": 512, 00:08:49.392 "num_blocks": 65536, 00:08:49.392 "uuid": "4ad61073-e6bf-478d-ade9-240afb5d4095", 00:08:49.392 "assigned_rate_limits": { 00:08:49.392 "rw_ios_per_sec": 0, 00:08:49.392 "rw_mbytes_per_sec": 0, 00:08:49.392 "r_mbytes_per_sec": 0, 00:08:49.392 "w_mbytes_per_sec": 0 00:08:49.392 }, 00:08:49.392 "claimed": true, 00:08:49.392 "claim_type": "exclusive_write", 00:08:49.392 "zoned": false, 00:08:49.392 "supported_io_types": { 00:08:49.392 "read": true, 00:08:49.392 "write": true, 00:08:49.392 "unmap": true, 00:08:49.392 "flush": true, 00:08:49.392 "reset": true, 00:08:49.392 "nvme_admin": false, 00:08:49.392 "nvme_io": false, 00:08:49.392 "nvme_io_md": false, 00:08:49.392 "write_zeroes": true, 00:08:49.392 "zcopy": true, 00:08:49.392 "get_zone_info": false, 00:08:49.392 "zone_management": false, 00:08:49.392 "zone_append": false, 00:08:49.392 "compare": false, 00:08:49.392 "compare_and_write": false, 00:08:49.392 "abort": true, 00:08:49.392 "seek_hole": 
false, 00:08:49.392 "seek_data": false, 00:08:49.392 "copy": true, 00:08:49.392 "nvme_iov_md": false 00:08:49.392 }, 00:08:49.392 "memory_domains": [ 00:08:49.392 { 00:08:49.392 "dma_device_id": "system", 00:08:49.392 "dma_device_type": 1 00:08:49.392 }, 00:08:49.392 { 00:08:49.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.392 "dma_device_type": 2 00:08:49.392 } 00:08:49.392 ], 00:08:49.392 "driver_specific": {} 00:08:49.392 } 00:08:49.392 ] 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.392 "name": "Existed_Raid", 00:08:49.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.392 "strip_size_kb": 64, 00:08:49.392 "state": "configuring", 00:08:49.392 "raid_level": "concat", 00:08:49.392 "superblock": false, 00:08:49.392 "num_base_bdevs": 3, 00:08:49.392 "num_base_bdevs_discovered": 2, 00:08:49.392 "num_base_bdevs_operational": 3, 00:08:49.392 "base_bdevs_list": [ 00:08:49.392 { 00:08:49.392 "name": "BaseBdev1", 00:08:49.392 "uuid": "16f0a146-fdd0-4686-9e8a-3f8ef9750c7c", 00:08:49.392 "is_configured": true, 00:08:49.392 "data_offset": 0, 00:08:49.392 "data_size": 65536 00:08:49.392 }, 00:08:49.392 { 00:08:49.392 "name": "BaseBdev2", 00:08:49.392 "uuid": "4ad61073-e6bf-478d-ade9-240afb5d4095", 00:08:49.392 "is_configured": true, 00:08:49.392 "data_offset": 0, 00:08:49.392 "data_size": 65536 00:08:49.392 }, 00:08:49.392 { 00:08:49.392 "name": "BaseBdev3", 00:08:49.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.392 "is_configured": false, 00:08:49.392 "data_offset": 0, 00:08:49.392 "data_size": 0 00:08:49.392 } 00:08:49.392 ] 00:08:49.392 }' 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.392 09:46:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.651 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:49.651 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.651 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.651 [2024-12-06 09:46:14.909758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.651 [2024-12-06 09:46:14.909905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:49.651 [2024-12-06 09:46:14.909937] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:49.651 [2024-12-06 09:46:14.910260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:49.651 [2024-12-06 09:46:14.910480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:49.651 [2024-12-06 09:46:14.910526] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:49.651 [2024-12-06 09:46:14.910846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.651 BaseBdev3 00:08:49.651 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.651 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:49.651 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:49.651 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.651 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:49.651 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.651 09:46:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.651 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.651 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.651 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.909 [ 00:08:49.909 { 00:08:49.909 "name": "BaseBdev3", 00:08:49.909 "aliases": [ 00:08:49.909 "78e79225-cbad-4f96-8edf-ddb5278ab08b" 00:08:49.909 ], 00:08:49.909 "product_name": "Malloc disk", 00:08:49.909 "block_size": 512, 00:08:49.909 "num_blocks": 65536, 00:08:49.909 "uuid": "78e79225-cbad-4f96-8edf-ddb5278ab08b", 00:08:49.909 "assigned_rate_limits": { 00:08:49.909 "rw_ios_per_sec": 0, 00:08:49.909 "rw_mbytes_per_sec": 0, 00:08:49.909 "r_mbytes_per_sec": 0, 00:08:49.909 "w_mbytes_per_sec": 0 00:08:49.909 }, 00:08:49.909 "claimed": true, 00:08:49.909 "claim_type": "exclusive_write", 00:08:49.909 "zoned": false, 00:08:49.909 "supported_io_types": { 00:08:49.909 "read": true, 00:08:49.909 "write": true, 00:08:49.909 "unmap": true, 00:08:49.909 "flush": true, 00:08:49.909 "reset": true, 00:08:49.909 "nvme_admin": false, 00:08:49.909 "nvme_io": false, 00:08:49.909 "nvme_io_md": false, 00:08:49.909 "write_zeroes": true, 00:08:49.909 "zcopy": true, 00:08:49.909 "get_zone_info": false, 00:08:49.909 "zone_management": false, 00:08:49.909 "zone_append": false, 00:08:49.909 "compare": false, 
00:08:49.909 "compare_and_write": false, 00:08:49.909 "abort": true, 00:08:49.909 "seek_hole": false, 00:08:49.909 "seek_data": false, 00:08:49.909 "copy": true, 00:08:49.909 "nvme_iov_md": false 00:08:49.909 }, 00:08:49.909 "memory_domains": [ 00:08:49.909 { 00:08:49.909 "dma_device_id": "system", 00:08:49.909 "dma_device_type": 1 00:08:49.909 }, 00:08:49.909 { 00:08:49.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.909 "dma_device_type": 2 00:08:49.909 } 00:08:49.909 ], 00:08:49.909 "driver_specific": {} 00:08:49.909 } 00:08:49.909 ] 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.909 "name": "Existed_Raid", 00:08:49.909 "uuid": "2c776f6f-f2ef-42b4-99bd-16a2a3f832d0", 00:08:49.909 "strip_size_kb": 64, 00:08:49.909 "state": "online", 00:08:49.909 "raid_level": "concat", 00:08:49.909 "superblock": false, 00:08:49.909 "num_base_bdevs": 3, 00:08:49.909 "num_base_bdevs_discovered": 3, 00:08:49.909 "num_base_bdevs_operational": 3, 00:08:49.909 "base_bdevs_list": [ 00:08:49.909 { 00:08:49.909 "name": "BaseBdev1", 00:08:49.909 "uuid": "16f0a146-fdd0-4686-9e8a-3f8ef9750c7c", 00:08:49.909 "is_configured": true, 00:08:49.909 "data_offset": 0, 00:08:49.909 "data_size": 65536 00:08:49.909 }, 00:08:49.909 { 00:08:49.909 "name": "BaseBdev2", 00:08:49.909 "uuid": "4ad61073-e6bf-478d-ade9-240afb5d4095", 00:08:49.909 "is_configured": true, 00:08:49.909 "data_offset": 0, 00:08:49.909 "data_size": 65536 00:08:49.909 }, 00:08:49.909 { 00:08:49.909 "name": "BaseBdev3", 00:08:49.909 "uuid": "78e79225-cbad-4f96-8edf-ddb5278ab08b", 00:08:49.909 "is_configured": true, 00:08:49.909 "data_offset": 0, 00:08:49.909 "data_size": 65536 00:08:49.909 } 00:08:49.909 ] 00:08:49.909 }' 00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:49.909 09:46:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:50.167 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:50.167 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.167 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:50.167 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.167 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.167 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:50.167 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.167 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.167 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.167 [2024-12-06 09:46:15.381427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.167 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.167 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.167 "name": "Existed_Raid", 00:08:50.167 "aliases": [ 00:08:50.167 "2c776f6f-f2ef-42b4-99bd-16a2a3f832d0" 00:08:50.167 ], 00:08:50.167 "product_name": "Raid Volume", 00:08:50.167 "block_size": 512, 00:08:50.167 "num_blocks": 196608, 00:08:50.167 "uuid": "2c776f6f-f2ef-42b4-99bd-16a2a3f832d0", 00:08:50.167 "assigned_rate_limits": { 00:08:50.167 "rw_ios_per_sec": 0, 00:08:50.167 "rw_mbytes_per_sec": 0, 00:08:50.167 "r_mbytes_per_sec": 
0, 00:08:50.167 "w_mbytes_per_sec": 0 00:08:50.167 }, 00:08:50.167 "claimed": false, 00:08:50.167 "zoned": false, 00:08:50.167 "supported_io_types": { 00:08:50.167 "read": true, 00:08:50.167 "write": true, 00:08:50.167 "unmap": true, 00:08:50.167 "flush": true, 00:08:50.167 "reset": true, 00:08:50.167 "nvme_admin": false, 00:08:50.167 "nvme_io": false, 00:08:50.167 "nvme_io_md": false, 00:08:50.167 "write_zeroes": true, 00:08:50.167 "zcopy": false, 00:08:50.167 "get_zone_info": false, 00:08:50.167 "zone_management": false, 00:08:50.167 "zone_append": false, 00:08:50.167 "compare": false, 00:08:50.167 "compare_and_write": false, 00:08:50.167 "abort": false, 00:08:50.167 "seek_hole": false, 00:08:50.167 "seek_data": false, 00:08:50.167 "copy": false, 00:08:50.167 "nvme_iov_md": false 00:08:50.167 }, 00:08:50.167 "memory_domains": [ 00:08:50.167 { 00:08:50.167 "dma_device_id": "system", 00:08:50.167 "dma_device_type": 1 00:08:50.167 }, 00:08:50.167 { 00:08:50.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.167 "dma_device_type": 2 00:08:50.167 }, 00:08:50.167 { 00:08:50.167 "dma_device_id": "system", 00:08:50.167 "dma_device_type": 1 00:08:50.167 }, 00:08:50.167 { 00:08:50.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.167 "dma_device_type": 2 00:08:50.167 }, 00:08:50.167 { 00:08:50.167 "dma_device_id": "system", 00:08:50.167 "dma_device_type": 1 00:08:50.167 }, 00:08:50.167 { 00:08:50.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.167 "dma_device_type": 2 00:08:50.167 } 00:08:50.167 ], 00:08:50.167 "driver_specific": { 00:08:50.167 "raid": { 00:08:50.167 "uuid": "2c776f6f-f2ef-42b4-99bd-16a2a3f832d0", 00:08:50.167 "strip_size_kb": 64, 00:08:50.167 "state": "online", 00:08:50.167 "raid_level": "concat", 00:08:50.167 "superblock": false, 00:08:50.167 "num_base_bdevs": 3, 00:08:50.167 "num_base_bdevs_discovered": 3, 00:08:50.167 "num_base_bdevs_operational": 3, 00:08:50.167 "base_bdevs_list": [ 00:08:50.167 { 00:08:50.167 "name": "BaseBdev1", 
00:08:50.167 "uuid": "16f0a146-fdd0-4686-9e8a-3f8ef9750c7c", 00:08:50.167 "is_configured": true, 00:08:50.167 "data_offset": 0, 00:08:50.167 "data_size": 65536 00:08:50.167 }, 00:08:50.167 { 00:08:50.167 "name": "BaseBdev2", 00:08:50.167 "uuid": "4ad61073-e6bf-478d-ade9-240afb5d4095", 00:08:50.167 "is_configured": true, 00:08:50.167 "data_offset": 0, 00:08:50.167 "data_size": 65536 00:08:50.167 }, 00:08:50.167 { 00:08:50.167 "name": "BaseBdev3", 00:08:50.167 "uuid": "78e79225-cbad-4f96-8edf-ddb5278ab08b", 00:08:50.167 "is_configured": true, 00:08:50.167 "data_offset": 0, 00:08:50.167 "data_size": 65536 00:08:50.167 } 00:08:50.167 ] 00:08:50.167 } 00:08:50.167 } 00:08:50.167 }' 00:08:50.167 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:50.425 BaseBdev2 00:08:50.425 BaseBdev3' 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
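The per-bdev check traced above extracts `block_size`, `md_size`, `md_interleave`, and `dif_type` from the raid bdev and from each configured base bdev, then asserts the joined strings are identical. A minimal standalone sketch of that comparison follows; the inline JSON is a trimmed stand-in for real `rpc_cmd bdev_get_bdevs -b <name>` output (a live SPDK target would supply the actual data), and field values here are assumptions chosen to mirror the trace.

```shell
#!/usr/bin/env bash
# Sketch of the property check from verify_raid_bdev_properties: build a
# "block_size md_size md_interleave dif_type" fingerprint for the raid bdev
# and for a base bdev, then compare them. Null/absent fields join as empty
# strings, which is why the trace compares against '512' plus trailing spaces.
set -euo pipefail

# Stand-in JSON (assumption): trimmed bdev_get_bdevs output for a raid bdev
# and for one base bdev; only the compared fields are included.
raid_json='{"block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null}'
base_json='[{"block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null}]'

fields='[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'

cmp_raid_bdev=$(jq -r "$fields" <<< "$raid_json")
cmp_base_bdev=$(jq -r ".[] | $fields" <<< "$base_json")

# Mirrors the [[ 512 == \5\1\2\ \ \  ]] comparison in the trace: matching
# bdevs produce identical space-padded fingerprints.
[[ "$cmp_raid_bdev" == "$cmp_base_bdev" ]] && echo MATCH
```

Because `jq`'s `join` renders `null` as an empty string, a bdev with no metadata or DIF configuration yields a fingerprint of just the block size followed by separator spaces, which is exactly the `'512 '` value captured into `cmp_raid_bdev` and `cmp_base_bdev` in the log.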
00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.425 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.425 [2024-12-06 09:46:15.644636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:50.425 [2024-12-06 09:46:15.644707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.425 [2024-12-06 09:46:15.644780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.683 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.683 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:50.683 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:50.683 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.683 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:50.683 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.684 "name": "Existed_Raid", 00:08:50.684 "uuid": "2c776f6f-f2ef-42b4-99bd-16a2a3f832d0", 00:08:50.684 "strip_size_kb": 64, 00:08:50.684 "state": "offline", 00:08:50.684 "raid_level": "concat", 00:08:50.684 "superblock": false, 00:08:50.684 "num_base_bdevs": 3, 00:08:50.684 "num_base_bdevs_discovered": 2, 00:08:50.684 "num_base_bdevs_operational": 2, 00:08:50.684 "base_bdevs_list": [ 00:08:50.684 { 00:08:50.684 "name": null, 00:08:50.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.684 "is_configured": false, 00:08:50.684 "data_offset": 0, 00:08:50.684 "data_size": 65536 00:08:50.684 }, 00:08:50.684 { 00:08:50.684 "name": "BaseBdev2", 00:08:50.684 "uuid": 
"4ad61073-e6bf-478d-ade9-240afb5d4095", 00:08:50.684 "is_configured": true, 00:08:50.684 "data_offset": 0, 00:08:50.684 "data_size": 65536 00:08:50.684 }, 00:08:50.684 { 00:08:50.684 "name": "BaseBdev3", 00:08:50.684 "uuid": "78e79225-cbad-4f96-8edf-ddb5278ab08b", 00:08:50.684 "is_configured": true, 00:08:50.684 "data_offset": 0, 00:08:50.684 "data_size": 65536 00:08:50.684 } 00:08:50.684 ] 00:08:50.684 }' 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.684 09:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.941 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:50.941 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.941 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.941 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.941 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.941 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.200 [2024-12-06 09:46:16.256052] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.200 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.200 [2024-12-06 09:46:16.391039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:51.200 [2024-12-06 09:46:16.391130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.460 09:46:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.460 BaseBdev2 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.460 
09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.460 [ 00:08:51.460 { 00:08:51.460 "name": "BaseBdev2", 00:08:51.460 "aliases": [ 00:08:51.460 "07772ffd-17fc-4541-8578-aaba30b4adba" 00:08:51.460 ], 00:08:51.460 "product_name": "Malloc disk", 00:08:51.460 "block_size": 512, 00:08:51.460 "num_blocks": 65536, 00:08:51.460 "uuid": "07772ffd-17fc-4541-8578-aaba30b4adba", 00:08:51.460 "assigned_rate_limits": { 00:08:51.460 "rw_ios_per_sec": 0, 00:08:51.460 "rw_mbytes_per_sec": 0, 00:08:51.460 "r_mbytes_per_sec": 0, 00:08:51.460 "w_mbytes_per_sec": 0 00:08:51.460 }, 00:08:51.460 "claimed": false, 00:08:51.460 "zoned": false, 00:08:51.460 "supported_io_types": { 00:08:51.460 "read": true, 00:08:51.460 "write": true, 00:08:51.460 "unmap": true, 00:08:51.460 "flush": true, 00:08:51.460 "reset": true, 00:08:51.460 "nvme_admin": false, 00:08:51.460 "nvme_io": false, 00:08:51.460 "nvme_io_md": false, 00:08:51.460 "write_zeroes": true, 
00:08:51.460 "zcopy": true, 00:08:51.460 "get_zone_info": false, 00:08:51.460 "zone_management": false, 00:08:51.460 "zone_append": false, 00:08:51.460 "compare": false, 00:08:51.460 "compare_and_write": false, 00:08:51.460 "abort": true, 00:08:51.460 "seek_hole": false, 00:08:51.460 "seek_data": false, 00:08:51.460 "copy": true, 00:08:51.460 "nvme_iov_md": false 00:08:51.460 }, 00:08:51.460 "memory_domains": [ 00:08:51.460 { 00:08:51.460 "dma_device_id": "system", 00:08:51.460 "dma_device_type": 1 00:08:51.460 }, 00:08:51.460 { 00:08:51.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.460 "dma_device_type": 2 00:08:51.460 } 00:08:51.460 ], 00:08:51.460 "driver_specific": {} 00:08:51.460 } 00:08:51.460 ] 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.460 BaseBdev3 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.460 09:46:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.460 [ 00:08:51.460 { 00:08:51.460 "name": "BaseBdev3", 00:08:51.460 "aliases": [ 00:08:51.460 "7f7215bc-ca47-4e08-b862-4c837f80d439" 00:08:51.460 ], 00:08:51.460 "product_name": "Malloc disk", 00:08:51.460 "block_size": 512, 00:08:51.460 "num_blocks": 65536, 00:08:51.460 "uuid": "7f7215bc-ca47-4e08-b862-4c837f80d439", 00:08:51.460 "assigned_rate_limits": { 00:08:51.460 "rw_ios_per_sec": 0, 00:08:51.460 "rw_mbytes_per_sec": 0, 00:08:51.460 "r_mbytes_per_sec": 0, 00:08:51.460 "w_mbytes_per_sec": 0 00:08:51.460 }, 00:08:51.460 "claimed": false, 00:08:51.460 "zoned": false, 00:08:51.460 "supported_io_types": { 00:08:51.460 "read": true, 00:08:51.460 "write": true, 00:08:51.460 "unmap": true, 00:08:51.460 "flush": true, 00:08:51.460 "reset": true, 00:08:51.460 "nvme_admin": false, 00:08:51.460 "nvme_io": false, 00:08:51.460 "nvme_io_md": false, 00:08:51.460 "write_zeroes": true, 
00:08:51.460 "zcopy": true, 00:08:51.460 "get_zone_info": false, 00:08:51.460 "zone_management": false, 00:08:51.460 "zone_append": false, 00:08:51.460 "compare": false, 00:08:51.460 "compare_and_write": false, 00:08:51.460 "abort": true, 00:08:51.460 "seek_hole": false, 00:08:51.460 "seek_data": false, 00:08:51.460 "copy": true, 00:08:51.460 "nvme_iov_md": false 00:08:51.460 }, 00:08:51.460 "memory_domains": [ 00:08:51.460 { 00:08:51.460 "dma_device_id": "system", 00:08:51.460 "dma_device_type": 1 00:08:51.460 }, 00:08:51.460 { 00:08:51.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.460 "dma_device_type": 2 00:08:51.460 } 00:08:51.460 ], 00:08:51.460 "driver_specific": {} 00:08:51.460 } 00:08:51.460 ] 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:51.460 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.461 [2024-12-06 09:46:16.712135] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:51.461 [2024-12-06 09:46:16.712247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:51.461 [2024-12-06 09:46:16.712297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.461 [2024-12-06 09:46:16.714338] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.461 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.720 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.720 09:46:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.720 "name": "Existed_Raid", 00:08:51.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.720 "strip_size_kb": 64, 00:08:51.720 "state": "configuring", 00:08:51.720 "raid_level": "concat", 00:08:51.720 "superblock": false, 00:08:51.720 "num_base_bdevs": 3, 00:08:51.720 "num_base_bdevs_discovered": 2, 00:08:51.720 "num_base_bdevs_operational": 3, 00:08:51.720 "base_bdevs_list": [ 00:08:51.720 { 00:08:51.720 "name": "BaseBdev1", 00:08:51.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.720 "is_configured": false, 00:08:51.720 "data_offset": 0, 00:08:51.720 "data_size": 0 00:08:51.720 }, 00:08:51.720 { 00:08:51.720 "name": "BaseBdev2", 00:08:51.720 "uuid": "07772ffd-17fc-4541-8578-aaba30b4adba", 00:08:51.720 "is_configured": true, 00:08:51.720 "data_offset": 0, 00:08:51.720 "data_size": 65536 00:08:51.720 }, 00:08:51.720 { 00:08:51.720 "name": "BaseBdev3", 00:08:51.720 "uuid": "7f7215bc-ca47-4e08-b862-4c837f80d439", 00:08:51.720 "is_configured": true, 00:08:51.720 "data_offset": 0, 00:08:51.720 "data_size": 65536 00:08:51.720 } 00:08:51.720 ] 00:08:51.720 }' 00:08:51.720 09:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.720 09:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.979 [2024-12-06 09:46:17.167429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.979 "name": "Existed_Raid", 00:08:51.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.979 "strip_size_kb": 64, 00:08:51.979 "state": "configuring", 00:08:51.979 "raid_level": "concat", 00:08:51.979 "superblock": false, 
00:08:51.979 "num_base_bdevs": 3, 00:08:51.979 "num_base_bdevs_discovered": 1, 00:08:51.979 "num_base_bdevs_operational": 3, 00:08:51.979 "base_bdevs_list": [ 00:08:51.979 { 00:08:51.979 "name": "BaseBdev1", 00:08:51.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.979 "is_configured": false, 00:08:51.979 "data_offset": 0, 00:08:51.979 "data_size": 0 00:08:51.979 }, 00:08:51.979 { 00:08:51.979 "name": null, 00:08:51.979 "uuid": "07772ffd-17fc-4541-8578-aaba30b4adba", 00:08:51.979 "is_configured": false, 00:08:51.979 "data_offset": 0, 00:08:51.979 "data_size": 65536 00:08:51.979 }, 00:08:51.979 { 00:08:51.979 "name": "BaseBdev3", 00:08:51.979 "uuid": "7f7215bc-ca47-4e08-b862-4c837f80d439", 00:08:51.979 "is_configured": true, 00:08:51.979 "data_offset": 0, 00:08:51.979 "data_size": 65536 00:08:51.979 } 00:08:51.979 ] 00:08:51.979 }' 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.979 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.547 
09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.547 [2024-12-06 09:46:17.691085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.547 BaseBdev1 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.547 [ 00:08:52.547 { 00:08:52.547 "name": "BaseBdev1", 00:08:52.547 "aliases": [ 00:08:52.547 "44360b19-bdb4-4add-8350-33cba0f314f3" 00:08:52.547 ], 00:08:52.547 "product_name": 
"Malloc disk", 00:08:52.547 "block_size": 512, 00:08:52.547 "num_blocks": 65536, 00:08:52.547 "uuid": "44360b19-bdb4-4add-8350-33cba0f314f3", 00:08:52.547 "assigned_rate_limits": { 00:08:52.547 "rw_ios_per_sec": 0, 00:08:52.547 "rw_mbytes_per_sec": 0, 00:08:52.547 "r_mbytes_per_sec": 0, 00:08:52.547 "w_mbytes_per_sec": 0 00:08:52.547 }, 00:08:52.547 "claimed": true, 00:08:52.547 "claim_type": "exclusive_write", 00:08:52.547 "zoned": false, 00:08:52.547 "supported_io_types": { 00:08:52.547 "read": true, 00:08:52.547 "write": true, 00:08:52.547 "unmap": true, 00:08:52.547 "flush": true, 00:08:52.547 "reset": true, 00:08:52.547 "nvme_admin": false, 00:08:52.547 "nvme_io": false, 00:08:52.547 "nvme_io_md": false, 00:08:52.547 "write_zeroes": true, 00:08:52.547 "zcopy": true, 00:08:52.547 "get_zone_info": false, 00:08:52.547 "zone_management": false, 00:08:52.547 "zone_append": false, 00:08:52.547 "compare": false, 00:08:52.547 "compare_and_write": false, 00:08:52.547 "abort": true, 00:08:52.547 "seek_hole": false, 00:08:52.547 "seek_data": false, 00:08:52.547 "copy": true, 00:08:52.547 "nvme_iov_md": false 00:08:52.547 }, 00:08:52.547 "memory_domains": [ 00:08:52.547 { 00:08:52.547 "dma_device_id": "system", 00:08:52.547 "dma_device_type": 1 00:08:52.547 }, 00:08:52.547 { 00:08:52.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.547 "dma_device_type": 2 00:08:52.547 } 00:08:52.547 ], 00:08:52.547 "driver_specific": {} 00:08:52.547 } 00:08:52.547 ] 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.547 09:46:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.547 "name": "Existed_Raid", 00:08:52.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.547 "strip_size_kb": 64, 00:08:52.547 "state": "configuring", 00:08:52.547 "raid_level": "concat", 00:08:52.547 "superblock": false, 00:08:52.547 "num_base_bdevs": 3, 00:08:52.547 "num_base_bdevs_discovered": 2, 00:08:52.547 "num_base_bdevs_operational": 3, 00:08:52.547 "base_bdevs_list": [ 00:08:52.547 { 00:08:52.547 "name": "BaseBdev1", 
00:08:52.547 "uuid": "44360b19-bdb4-4add-8350-33cba0f314f3", 00:08:52.547 "is_configured": true, 00:08:52.547 "data_offset": 0, 00:08:52.547 "data_size": 65536 00:08:52.547 }, 00:08:52.547 { 00:08:52.547 "name": null, 00:08:52.547 "uuid": "07772ffd-17fc-4541-8578-aaba30b4adba", 00:08:52.547 "is_configured": false, 00:08:52.547 "data_offset": 0, 00:08:52.547 "data_size": 65536 00:08:52.547 }, 00:08:52.547 { 00:08:52.547 "name": "BaseBdev3", 00:08:52.547 "uuid": "7f7215bc-ca47-4e08-b862-4c837f80d439", 00:08:52.547 "is_configured": true, 00:08:52.547 "data_offset": 0, 00:08:52.547 "data_size": 65536 00:08:52.547 } 00:08:52.547 ] 00:08:52.547 }' 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.547 09:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.115 [2024-12-06 09:46:18.206276] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:53.115 
09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.115 "name": "Existed_Raid", 00:08:53.115 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:53.115 "strip_size_kb": 64, 00:08:53.115 "state": "configuring", 00:08:53.115 "raid_level": "concat", 00:08:53.115 "superblock": false, 00:08:53.115 "num_base_bdevs": 3, 00:08:53.115 "num_base_bdevs_discovered": 1, 00:08:53.115 "num_base_bdevs_operational": 3, 00:08:53.115 "base_bdevs_list": [ 00:08:53.115 { 00:08:53.115 "name": "BaseBdev1", 00:08:53.115 "uuid": "44360b19-bdb4-4add-8350-33cba0f314f3", 00:08:53.115 "is_configured": true, 00:08:53.115 "data_offset": 0, 00:08:53.115 "data_size": 65536 00:08:53.115 }, 00:08:53.115 { 00:08:53.115 "name": null, 00:08:53.115 "uuid": "07772ffd-17fc-4541-8578-aaba30b4adba", 00:08:53.115 "is_configured": false, 00:08:53.115 "data_offset": 0, 00:08:53.115 "data_size": 65536 00:08:53.115 }, 00:08:53.115 { 00:08:53.115 "name": null, 00:08:53.115 "uuid": "7f7215bc-ca47-4e08-b862-4c837f80d439", 00:08:53.115 "is_configured": false, 00:08:53.115 "data_offset": 0, 00:08:53.115 "data_size": 65536 00:08:53.115 } 00:08:53.115 ] 00:08:53.115 }' 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.115 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.682 [2024-12-06 09:46:18.713418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.682 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.683 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.683 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.683 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.683 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.683 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.683 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.683 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.683 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:53.683 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.683 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.683 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.683 "name": "Existed_Raid", 00:08:53.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.683 "strip_size_kb": 64, 00:08:53.683 "state": "configuring", 00:08:53.683 "raid_level": "concat", 00:08:53.683 "superblock": false, 00:08:53.683 "num_base_bdevs": 3, 00:08:53.683 "num_base_bdevs_discovered": 2, 00:08:53.683 "num_base_bdevs_operational": 3, 00:08:53.683 "base_bdevs_list": [ 00:08:53.683 { 00:08:53.683 "name": "BaseBdev1", 00:08:53.683 "uuid": "44360b19-bdb4-4add-8350-33cba0f314f3", 00:08:53.683 "is_configured": true, 00:08:53.683 "data_offset": 0, 00:08:53.683 "data_size": 65536 00:08:53.683 }, 00:08:53.683 { 00:08:53.683 "name": null, 00:08:53.683 "uuid": "07772ffd-17fc-4541-8578-aaba30b4adba", 00:08:53.683 "is_configured": false, 00:08:53.683 "data_offset": 0, 00:08:53.683 "data_size": 65536 00:08:53.683 }, 00:08:53.683 { 00:08:53.683 "name": "BaseBdev3", 00:08:53.683 "uuid": "7f7215bc-ca47-4e08-b862-4c837f80d439", 00:08:53.683 "is_configured": true, 00:08:53.683 "data_offset": 0, 00:08:53.683 "data_size": 65536 00:08:53.683 } 00:08:53.683 ] 00:08:53.683 }' 00:08:53.683 09:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.683 09:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.956 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:53.956 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.956 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:53.956 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.956 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.956 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:53.956 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:53.956 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.956 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.956 [2024-12-06 09:46:19.188673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.214 09:46:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.214 "name": "Existed_Raid", 00:08:54.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.214 "strip_size_kb": 64, 00:08:54.214 "state": "configuring", 00:08:54.214 "raid_level": "concat", 00:08:54.214 "superblock": false, 00:08:54.214 "num_base_bdevs": 3, 00:08:54.214 "num_base_bdevs_discovered": 1, 00:08:54.214 "num_base_bdevs_operational": 3, 00:08:54.214 "base_bdevs_list": [ 00:08:54.214 { 00:08:54.214 "name": null, 00:08:54.214 "uuid": "44360b19-bdb4-4add-8350-33cba0f314f3", 00:08:54.214 "is_configured": false, 00:08:54.214 "data_offset": 0, 00:08:54.214 "data_size": 65536 00:08:54.214 }, 00:08:54.214 { 00:08:54.214 "name": null, 00:08:54.214 "uuid": "07772ffd-17fc-4541-8578-aaba30b4adba", 00:08:54.214 "is_configured": false, 00:08:54.214 "data_offset": 0, 00:08:54.214 "data_size": 65536 00:08:54.214 }, 00:08:54.214 { 00:08:54.214 "name": "BaseBdev3", 00:08:54.214 "uuid": "7f7215bc-ca47-4e08-b862-4c837f80d439", 00:08:54.214 "is_configured": true, 00:08:54.214 "data_offset": 0, 00:08:54.214 "data_size": 65536 00:08:54.214 } 00:08:54.214 ] 00:08:54.214 }' 00:08:54.214 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.214 09:46:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.792 [2024-12-06 09:46:19.815911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.792 09:46:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.792 "name": "Existed_Raid", 00:08:54.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.792 "strip_size_kb": 64, 00:08:54.792 "state": "configuring", 00:08:54.792 "raid_level": "concat", 00:08:54.792 "superblock": false, 00:08:54.792 "num_base_bdevs": 3, 00:08:54.792 "num_base_bdevs_discovered": 2, 00:08:54.792 "num_base_bdevs_operational": 3, 00:08:54.792 "base_bdevs_list": [ 00:08:54.792 { 00:08:54.792 "name": null, 00:08:54.792 "uuid": "44360b19-bdb4-4add-8350-33cba0f314f3", 00:08:54.792 "is_configured": false, 00:08:54.792 "data_offset": 0, 00:08:54.792 "data_size": 65536 00:08:54.792 }, 00:08:54.792 { 00:08:54.792 "name": "BaseBdev2", 00:08:54.792 "uuid": "07772ffd-17fc-4541-8578-aaba30b4adba", 00:08:54.792 "is_configured": true, 00:08:54.792 "data_offset": 
0, 00:08:54.792 "data_size": 65536 00:08:54.792 }, 00:08:54.792 { 00:08:54.792 "name": "BaseBdev3", 00:08:54.792 "uuid": "7f7215bc-ca47-4e08-b862-4c837f80d439", 00:08:54.792 "is_configured": true, 00:08:54.792 "data_offset": 0, 00:08:54.792 "data_size": 65536 00:08:54.792 } 00:08:54.792 ] 00:08:54.792 }' 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.792 09:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.052 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.052 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:55.052 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.052 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.052 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.310 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:55.310 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.310 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.310 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:55.310 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.310 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.310 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 44360b19-bdb4-4add-8350-33cba0f314f3 00:08:55.310 09:46:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.310 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.310 [2024-12-06 09:46:20.422503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:55.310 [2024-12-06 09:46:20.422653] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:55.310 [2024-12-06 09:46:20.422686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:55.310 [2024-12-06 09:46:20.423032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:55.310 [2024-12-06 09:46:20.423313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:55.310 [2024-12-06 09:46:20.423369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:55.310 [2024-12-06 09:46:20.423829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.310 NewBaseBdev 00:08:55.310 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.310 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:55.310 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:55.310 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.311 
09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.311 [ 00:08:55.311 { 00:08:55.311 "name": "NewBaseBdev", 00:08:55.311 "aliases": [ 00:08:55.311 "44360b19-bdb4-4add-8350-33cba0f314f3" 00:08:55.311 ], 00:08:55.311 "product_name": "Malloc disk", 00:08:55.311 "block_size": 512, 00:08:55.311 "num_blocks": 65536, 00:08:55.311 "uuid": "44360b19-bdb4-4add-8350-33cba0f314f3", 00:08:55.311 "assigned_rate_limits": { 00:08:55.311 "rw_ios_per_sec": 0, 00:08:55.311 "rw_mbytes_per_sec": 0, 00:08:55.311 "r_mbytes_per_sec": 0, 00:08:55.311 "w_mbytes_per_sec": 0 00:08:55.311 }, 00:08:55.311 "claimed": true, 00:08:55.311 "claim_type": "exclusive_write", 00:08:55.311 "zoned": false, 00:08:55.311 "supported_io_types": { 00:08:55.311 "read": true, 00:08:55.311 "write": true, 00:08:55.311 "unmap": true, 00:08:55.311 "flush": true, 00:08:55.311 "reset": true, 00:08:55.311 "nvme_admin": false, 00:08:55.311 "nvme_io": false, 00:08:55.311 "nvme_io_md": false, 00:08:55.311 "write_zeroes": true, 00:08:55.311 "zcopy": true, 00:08:55.311 "get_zone_info": false, 00:08:55.311 "zone_management": false, 00:08:55.311 "zone_append": false, 00:08:55.311 "compare": false, 00:08:55.311 "compare_and_write": false, 00:08:55.311 "abort": true, 00:08:55.311 "seek_hole": false, 00:08:55.311 "seek_data": false, 00:08:55.311 "copy": true, 00:08:55.311 "nvme_iov_md": false 00:08:55.311 }, 00:08:55.311 
"memory_domains": [ 00:08:55.311 { 00:08:55.311 "dma_device_id": "system", 00:08:55.311 "dma_device_type": 1 00:08:55.311 }, 00:08:55.311 { 00:08:55.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.311 "dma_device_type": 2 00:08:55.311 } 00:08:55.311 ], 00:08:55.311 "driver_specific": {} 00:08:55.311 } 00:08:55.311 ] 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.311 "name": "Existed_Raid", 00:08:55.311 "uuid": "0d8a45f0-038a-4e9a-a67a-1b426c24d9fe", 00:08:55.311 "strip_size_kb": 64, 00:08:55.311 "state": "online", 00:08:55.311 "raid_level": "concat", 00:08:55.311 "superblock": false, 00:08:55.311 "num_base_bdevs": 3, 00:08:55.311 "num_base_bdevs_discovered": 3, 00:08:55.311 "num_base_bdevs_operational": 3, 00:08:55.311 "base_bdevs_list": [ 00:08:55.311 { 00:08:55.311 "name": "NewBaseBdev", 00:08:55.311 "uuid": "44360b19-bdb4-4add-8350-33cba0f314f3", 00:08:55.311 "is_configured": true, 00:08:55.311 "data_offset": 0, 00:08:55.311 "data_size": 65536 00:08:55.311 }, 00:08:55.311 { 00:08:55.311 "name": "BaseBdev2", 00:08:55.311 "uuid": "07772ffd-17fc-4541-8578-aaba30b4adba", 00:08:55.311 "is_configured": true, 00:08:55.311 "data_offset": 0, 00:08:55.311 "data_size": 65536 00:08:55.311 }, 00:08:55.311 { 00:08:55.311 "name": "BaseBdev3", 00:08:55.311 "uuid": "7f7215bc-ca47-4e08-b862-4c837f80d439", 00:08:55.311 "is_configured": true, 00:08:55.311 "data_offset": 0, 00:08:55.311 "data_size": 65536 00:08:55.311 } 00:08:55.311 ] 00:08:55.311 }' 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.311 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.569 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.569 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.569 09:46:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.569 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.569 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.569 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.569 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:55.569 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.569 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.569 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.569 [2024-12-06 09:46:20.814358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.569 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.828 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.828 "name": "Existed_Raid", 00:08:55.828 "aliases": [ 00:08:55.828 "0d8a45f0-038a-4e9a-a67a-1b426c24d9fe" 00:08:55.828 ], 00:08:55.828 "product_name": "Raid Volume", 00:08:55.828 "block_size": 512, 00:08:55.828 "num_blocks": 196608, 00:08:55.828 "uuid": "0d8a45f0-038a-4e9a-a67a-1b426c24d9fe", 00:08:55.828 "assigned_rate_limits": { 00:08:55.828 "rw_ios_per_sec": 0, 00:08:55.828 "rw_mbytes_per_sec": 0, 00:08:55.828 "r_mbytes_per_sec": 0, 00:08:55.828 "w_mbytes_per_sec": 0 00:08:55.828 }, 00:08:55.828 "claimed": false, 00:08:55.828 "zoned": false, 00:08:55.828 "supported_io_types": { 00:08:55.828 "read": true, 00:08:55.828 "write": true, 00:08:55.828 "unmap": true, 00:08:55.828 "flush": true, 00:08:55.828 "reset": true, 00:08:55.828 "nvme_admin": false, 00:08:55.828 "nvme_io": false, 00:08:55.828 "nvme_io_md": false, 00:08:55.828 
"write_zeroes": true, 00:08:55.828 "zcopy": false, 00:08:55.828 "get_zone_info": false, 00:08:55.828 "zone_management": false, 00:08:55.828 "zone_append": false, 00:08:55.828 "compare": false, 00:08:55.828 "compare_and_write": false, 00:08:55.828 "abort": false, 00:08:55.828 "seek_hole": false, 00:08:55.828 "seek_data": false, 00:08:55.828 "copy": false, 00:08:55.828 "nvme_iov_md": false 00:08:55.828 }, 00:08:55.828 "memory_domains": [ 00:08:55.828 { 00:08:55.828 "dma_device_id": "system", 00:08:55.828 "dma_device_type": 1 00:08:55.828 }, 00:08:55.828 { 00:08:55.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.828 "dma_device_type": 2 00:08:55.828 }, 00:08:55.828 { 00:08:55.828 "dma_device_id": "system", 00:08:55.828 "dma_device_type": 1 00:08:55.828 }, 00:08:55.828 { 00:08:55.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.828 "dma_device_type": 2 00:08:55.828 }, 00:08:55.828 { 00:08:55.828 "dma_device_id": "system", 00:08:55.828 "dma_device_type": 1 00:08:55.828 }, 00:08:55.828 { 00:08:55.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.828 "dma_device_type": 2 00:08:55.828 } 00:08:55.828 ], 00:08:55.828 "driver_specific": { 00:08:55.828 "raid": { 00:08:55.828 "uuid": "0d8a45f0-038a-4e9a-a67a-1b426c24d9fe", 00:08:55.828 "strip_size_kb": 64, 00:08:55.828 "state": "online", 00:08:55.828 "raid_level": "concat", 00:08:55.828 "superblock": false, 00:08:55.828 "num_base_bdevs": 3, 00:08:55.828 "num_base_bdevs_discovered": 3, 00:08:55.828 "num_base_bdevs_operational": 3, 00:08:55.828 "base_bdevs_list": [ 00:08:55.828 { 00:08:55.828 "name": "NewBaseBdev", 00:08:55.828 "uuid": "44360b19-bdb4-4add-8350-33cba0f314f3", 00:08:55.828 "is_configured": true, 00:08:55.828 "data_offset": 0, 00:08:55.828 "data_size": 65536 00:08:55.828 }, 00:08:55.828 { 00:08:55.828 "name": "BaseBdev2", 00:08:55.828 "uuid": "07772ffd-17fc-4541-8578-aaba30b4adba", 00:08:55.828 "is_configured": true, 00:08:55.828 "data_offset": 0, 00:08:55.828 "data_size": 65536 00:08:55.828 }, 
00:08:55.828 { 00:08:55.828 "name": "BaseBdev3", 00:08:55.828 "uuid": "7f7215bc-ca47-4e08-b862-4c837f80d439", 00:08:55.828 "is_configured": true, 00:08:55.828 "data_offset": 0, 00:08:55.828 "data_size": 65536 00:08:55.828 } 00:08:55.828 ] 00:08:55.828 } 00:08:55.828 } 00:08:55.828 }' 00:08:55.828 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.828 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:55.828 BaseBdev2 00:08:55.828 BaseBdev3' 00:08:55.828 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.828 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.828 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.829 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:55.829 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.829 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.829 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.829 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.829 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.829 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.829 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.829 09:46:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.829 09:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.829 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.829 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.829 09:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.829 
09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.829 [2024-12-06 09:46:21.045618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.829 [2024-12-06 09:46:21.045712] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.829 [2024-12-06 09:46:21.045853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.829 [2024-12-06 09:46:21.045963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.829 [2024-12-06 09:46:21.046026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65568 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65568 ']' 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65568 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65568 00:08:55.829 killing process with pid 65568 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65568' 00:08:55.829 09:46:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65568 00:08:55.829 09:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65568 00:08:55.829 [2024-12-06 09:46:21.072473] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.397 [2024-12-06 09:46:21.402655] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.340 ************************************ 00:08:57.340 END TEST raid_state_function_test 00:08:57.340 ************************************ 00:08:57.340 09:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:57.340 00:08:57.340 real 0m10.427s 00:08:57.340 user 0m16.657s 00:08:57.340 sys 0m1.653s 00:08:57.340 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.340 09:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.340 09:46:22 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:57.340 09:46:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:57.340 09:46:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.340 09:46:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.340 ************************************ 00:08:57.340 START TEST raid_state_function_test_sb 00:08:57.340 ************************************ 00:08:57.340 09:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:57.340 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:57.340 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:57.340 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:57.340 09:46:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:57.341 09:46:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66190 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66190' 00:08:57.341 Process raid pid: 66190 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66190 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66190 ']' 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.341 09:46:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.601 [2024-12-06 09:46:22.668173] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:08:57.602 [2024-12-06 09:46:22.668359] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.602 [2024-12-06 09:46:22.832812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.862 [2024-12-06 09:46:22.961468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.121 [2024-12-06 09:46:23.162187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.121 [2024-12-06 09:46:23.162231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.381 [2024-12-06 09:46:23.523958] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.381 [2024-12-06 09:46:23.524076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.381 [2024-12-06 
09:46:23.524110] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.381 [2024-12-06 09:46:23.524134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.381 [2024-12-06 09:46:23.524171] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.381 [2024-12-06 09:46:23.524200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.381 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.382 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.382 09:46:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.382 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.382 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.382 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.382 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.382 "name": "Existed_Raid", 00:08:58.382 "uuid": "50f48ef5-d562-4435-9367-b685bc03940d", 00:08:58.382 "strip_size_kb": 64, 00:08:58.382 "state": "configuring", 00:08:58.382 "raid_level": "concat", 00:08:58.382 "superblock": true, 00:08:58.382 "num_base_bdevs": 3, 00:08:58.382 "num_base_bdevs_discovered": 0, 00:08:58.382 "num_base_bdevs_operational": 3, 00:08:58.382 "base_bdevs_list": [ 00:08:58.382 { 00:08:58.382 "name": "BaseBdev1", 00:08:58.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.382 "is_configured": false, 00:08:58.382 "data_offset": 0, 00:08:58.382 "data_size": 0 00:08:58.382 }, 00:08:58.382 { 00:08:58.382 "name": "BaseBdev2", 00:08:58.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.382 "is_configured": false, 00:08:58.382 "data_offset": 0, 00:08:58.382 "data_size": 0 00:08:58.382 }, 00:08:58.382 { 00:08:58.382 "name": "BaseBdev3", 00:08:58.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.382 "is_configured": false, 00:08:58.382 "data_offset": 0, 00:08:58.382 "data_size": 0 00:08:58.382 } 00:08:58.382 ] 00:08:58.382 }' 00:08:58.382 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.382 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.953 09:46:23 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.953 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 [2024-12-06 09:46:23.983214] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.953 [2024-12-06 09:46:23.983323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:58.953 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.953 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.953 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.953 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 [2024-12-06 09:46:23.995214] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.953 [2024-12-06 09:46:23.995312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.953 [2024-12-06 09:46:23.995351] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.953 [2024-12-06 09:46:23.995377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.953 [2024-12-06 09:46:23.995396] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.953 [2024-12-06 09:46:23.995417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.953 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.953 09:46:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.953 
09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.953 09:46:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 [2024-12-06 09:46:24.041850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.953 BaseBdev1 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 [ 00:08:58.953 { 
00:08:58.953 "name": "BaseBdev1", 00:08:58.953 "aliases": [ 00:08:58.953 "b430a796-59b3-40d3-995b-6181bba60198" 00:08:58.953 ], 00:08:58.953 "product_name": "Malloc disk", 00:08:58.953 "block_size": 512, 00:08:58.953 "num_blocks": 65536, 00:08:58.953 "uuid": "b430a796-59b3-40d3-995b-6181bba60198", 00:08:58.953 "assigned_rate_limits": { 00:08:58.953 "rw_ios_per_sec": 0, 00:08:58.953 "rw_mbytes_per_sec": 0, 00:08:58.953 "r_mbytes_per_sec": 0, 00:08:58.953 "w_mbytes_per_sec": 0 00:08:58.953 }, 00:08:58.953 "claimed": true, 00:08:58.953 "claim_type": "exclusive_write", 00:08:58.953 "zoned": false, 00:08:58.953 "supported_io_types": { 00:08:58.953 "read": true, 00:08:58.953 "write": true, 00:08:58.953 "unmap": true, 00:08:58.953 "flush": true, 00:08:58.953 "reset": true, 00:08:58.953 "nvme_admin": false, 00:08:58.953 "nvme_io": false, 00:08:58.953 "nvme_io_md": false, 00:08:58.953 "write_zeroes": true, 00:08:58.953 "zcopy": true, 00:08:58.953 "get_zone_info": false, 00:08:58.953 "zone_management": false, 00:08:58.953 "zone_append": false, 00:08:58.953 "compare": false, 00:08:58.953 "compare_and_write": false, 00:08:58.953 "abort": true, 00:08:58.953 "seek_hole": false, 00:08:58.953 "seek_data": false, 00:08:58.953 "copy": true, 00:08:58.953 "nvme_iov_md": false 00:08:58.953 }, 00:08:58.953 "memory_domains": [ 00:08:58.953 { 00:08:58.953 "dma_device_id": "system", 00:08:58.953 "dma_device_type": 1 00:08:58.953 }, 00:08:58.953 { 00:08:58.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.953 "dma_device_type": 2 00:08:58.953 } 00:08:58.953 ], 00:08:58.953 "driver_specific": {} 00:08:58.953 } 00:08:58.953 ] 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.953 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.953 "name": "Existed_Raid", 00:08:58.953 "uuid": "4b047c51-90b8-40f3-b135-16aad2a2f4e1", 00:08:58.953 "strip_size_kb": 64, 00:08:58.953 "state": "configuring", 00:08:58.953 "raid_level": "concat", 00:08:58.953 "superblock": true, 00:08:58.953 
"num_base_bdevs": 3, 00:08:58.953 "num_base_bdevs_discovered": 1, 00:08:58.953 "num_base_bdevs_operational": 3, 00:08:58.953 "base_bdevs_list": [ 00:08:58.953 { 00:08:58.953 "name": "BaseBdev1", 00:08:58.953 "uuid": "b430a796-59b3-40d3-995b-6181bba60198", 00:08:58.953 "is_configured": true, 00:08:58.953 "data_offset": 2048, 00:08:58.953 "data_size": 63488 00:08:58.953 }, 00:08:58.953 { 00:08:58.953 "name": "BaseBdev2", 00:08:58.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.953 "is_configured": false, 00:08:58.953 "data_offset": 0, 00:08:58.953 "data_size": 0 00:08:58.953 }, 00:08:58.953 { 00:08:58.953 "name": "BaseBdev3", 00:08:58.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.954 "is_configured": false, 00:08:58.954 "data_offset": 0, 00:08:58.954 "data_size": 0 00:08:58.954 } 00:08:58.954 ] 00:08:58.954 }' 00:08:58.954 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.954 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.522 [2024-12-06 09:46:24.557058] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.522 [2024-12-06 09:46:24.557176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:59.522 
09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.522 [2024-12-06 09:46:24.569102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.522 [2024-12-06 09:46:24.570985] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:59.522 [2024-12-06 09:46:24.571083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:59.522 [2024-12-06 09:46:24.571117] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:59.522 [2024-12-06 09:46:24.571156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.522 "name": "Existed_Raid", 00:08:59.522 "uuid": "62f763bb-0d89-42a7-a1e2-3b5d7cc40eb1", 00:08:59.522 "strip_size_kb": 64, 00:08:59.522 "state": "configuring", 00:08:59.522 "raid_level": "concat", 00:08:59.522 "superblock": true, 00:08:59.522 "num_base_bdevs": 3, 00:08:59.522 "num_base_bdevs_discovered": 1, 00:08:59.522 "num_base_bdevs_operational": 3, 00:08:59.522 "base_bdevs_list": [ 00:08:59.522 { 00:08:59.522 "name": "BaseBdev1", 00:08:59.522 "uuid": "b430a796-59b3-40d3-995b-6181bba60198", 00:08:59.522 "is_configured": true, 00:08:59.522 "data_offset": 2048, 00:08:59.522 "data_size": 63488 00:08:59.522 }, 00:08:59.522 { 00:08:59.522 "name": "BaseBdev2", 00:08:59.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.522 "is_configured": false, 00:08:59.522 "data_offset": 0, 00:08:59.522 "data_size": 0 00:08:59.522 }, 00:08:59.522 { 00:08:59.522 "name": "BaseBdev3", 00:08:59.522 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:59.522 "is_configured": false, 00:08:59.522 "data_offset": 0, 00:08:59.522 "data_size": 0 00:08:59.522 } 00:08:59.522 ] 00:08:59.522 }' 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.522 09:46:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.852 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:59.852 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.852 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.852 [2024-12-06 09:46:25.069693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.852 BaseBdev2 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.853 [ 00:08:59.853 { 00:08:59.853 "name": "BaseBdev2", 00:08:59.853 "aliases": [ 00:08:59.853 "93ca2c56-4aff-43bb-b76e-38a2b9275e42" 00:08:59.853 ], 00:08:59.853 "product_name": "Malloc disk", 00:08:59.853 "block_size": 512, 00:08:59.853 "num_blocks": 65536, 00:08:59.853 "uuid": "93ca2c56-4aff-43bb-b76e-38a2b9275e42", 00:08:59.853 "assigned_rate_limits": { 00:08:59.853 "rw_ios_per_sec": 0, 00:08:59.853 "rw_mbytes_per_sec": 0, 00:08:59.853 "r_mbytes_per_sec": 0, 00:08:59.853 "w_mbytes_per_sec": 0 00:08:59.853 }, 00:08:59.853 "claimed": true, 00:08:59.853 "claim_type": "exclusive_write", 00:08:59.853 "zoned": false, 00:08:59.853 "supported_io_types": { 00:08:59.853 "read": true, 00:08:59.853 "write": true, 00:08:59.853 "unmap": true, 00:08:59.853 "flush": true, 00:08:59.853 "reset": true, 00:08:59.853 "nvme_admin": false, 00:08:59.853 "nvme_io": false, 00:08:59.853 "nvme_io_md": false, 00:08:59.853 "write_zeroes": true, 00:08:59.853 "zcopy": true, 00:08:59.853 "get_zone_info": false, 00:08:59.853 "zone_management": false, 00:08:59.853 "zone_append": false, 00:08:59.853 "compare": false, 00:08:59.853 "compare_and_write": false, 00:08:59.853 "abort": true, 00:08:59.853 "seek_hole": false, 00:08:59.853 "seek_data": false, 00:08:59.853 "copy": true, 00:08:59.853 "nvme_iov_md": false 00:08:59.853 }, 00:08:59.853 "memory_domains": [ 00:08:59.853 { 00:08:59.853 "dma_device_id": "system", 00:08:59.853 "dma_device_type": 1 00:08:59.853 }, 00:08:59.853 { 00:08:59.853 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.853 "dma_device_type": 2 00:08:59.853 } 00:08:59.853 ], 00:08:59.853 "driver_specific": {} 00:08:59.853 } 00:08:59.853 ] 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.853 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.128 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.128 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.128 "name": "Existed_Raid", 00:09:00.128 "uuid": "62f763bb-0d89-42a7-a1e2-3b5d7cc40eb1", 00:09:00.128 "strip_size_kb": 64, 00:09:00.128 "state": "configuring", 00:09:00.128 "raid_level": "concat", 00:09:00.129 "superblock": true, 00:09:00.129 "num_base_bdevs": 3, 00:09:00.129 "num_base_bdevs_discovered": 2, 00:09:00.129 "num_base_bdevs_operational": 3, 00:09:00.129 "base_bdevs_list": [ 00:09:00.129 { 00:09:00.129 "name": "BaseBdev1", 00:09:00.129 "uuid": "b430a796-59b3-40d3-995b-6181bba60198", 00:09:00.129 "is_configured": true, 00:09:00.129 "data_offset": 2048, 00:09:00.129 "data_size": 63488 00:09:00.129 }, 00:09:00.129 { 00:09:00.129 "name": "BaseBdev2", 00:09:00.129 "uuid": "93ca2c56-4aff-43bb-b76e-38a2b9275e42", 00:09:00.129 "is_configured": true, 00:09:00.129 "data_offset": 2048, 00:09:00.129 "data_size": 63488 00:09:00.129 }, 00:09:00.129 { 00:09:00.129 "name": "BaseBdev3", 00:09:00.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.129 "is_configured": false, 00:09:00.129 "data_offset": 0, 00:09:00.129 "data_size": 0 00:09:00.129 } 00:09:00.129 ] 00:09:00.129 }' 00:09:00.129 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.129 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.392 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:00.392 09:46:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.392 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.392 [2024-12-06 09:46:25.555731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.392 [2024-12-06 09:46:25.556106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:00.392 [2024-12-06 09:46:25.556185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:00.392 [2024-12-06 09:46:25.556492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:00.392 BaseBdev3 00:09:00.392 [2024-12-06 09:46:25.556695] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:00.392 [2024-12-06 09:46:25.556743] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:00.392 [2024-12-06 09:46:25.556939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.392 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.392 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:00.392 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:00.392 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.393 [ 00:09:00.393 { 00:09:00.393 "name": "BaseBdev3", 00:09:00.393 "aliases": [ 00:09:00.393 "b246a780-50e6-4b4a-98e0-aa2ebacffee1" 00:09:00.393 ], 00:09:00.393 "product_name": "Malloc disk", 00:09:00.393 "block_size": 512, 00:09:00.393 "num_blocks": 65536, 00:09:00.393 "uuid": "b246a780-50e6-4b4a-98e0-aa2ebacffee1", 00:09:00.393 "assigned_rate_limits": { 00:09:00.393 "rw_ios_per_sec": 0, 00:09:00.393 "rw_mbytes_per_sec": 0, 00:09:00.393 "r_mbytes_per_sec": 0, 00:09:00.393 "w_mbytes_per_sec": 0 00:09:00.393 }, 00:09:00.393 "claimed": true, 00:09:00.393 "claim_type": "exclusive_write", 00:09:00.393 "zoned": false, 00:09:00.393 "supported_io_types": { 00:09:00.393 "read": true, 00:09:00.393 "write": true, 00:09:00.393 "unmap": true, 00:09:00.393 "flush": true, 00:09:00.393 "reset": true, 00:09:00.393 "nvme_admin": false, 00:09:00.393 "nvme_io": false, 00:09:00.393 "nvme_io_md": false, 00:09:00.393 "write_zeroes": true, 00:09:00.393 "zcopy": true, 00:09:00.393 "get_zone_info": false, 00:09:00.393 "zone_management": false, 00:09:00.393 "zone_append": false, 00:09:00.393 "compare": false, 00:09:00.393 "compare_and_write": false, 00:09:00.393 "abort": true, 00:09:00.393 "seek_hole": false, 00:09:00.393 "seek_data": false, 
00:09:00.393 "copy": true, 00:09:00.393 "nvme_iov_md": false 00:09:00.393 }, 00:09:00.393 "memory_domains": [ 00:09:00.393 { 00:09:00.393 "dma_device_id": "system", 00:09:00.393 "dma_device_type": 1 00:09:00.393 }, 00:09:00.393 { 00:09:00.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.393 "dma_device_type": 2 00:09:00.393 } 00:09:00.393 ], 00:09:00.393 "driver_specific": {} 00:09:00.393 } 00:09:00.393 ] 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.393 "name": "Existed_Raid", 00:09:00.393 "uuid": "62f763bb-0d89-42a7-a1e2-3b5d7cc40eb1", 00:09:00.393 "strip_size_kb": 64, 00:09:00.393 "state": "online", 00:09:00.393 "raid_level": "concat", 00:09:00.393 "superblock": true, 00:09:00.393 "num_base_bdevs": 3, 00:09:00.393 "num_base_bdevs_discovered": 3, 00:09:00.393 "num_base_bdevs_operational": 3, 00:09:00.393 "base_bdevs_list": [ 00:09:00.393 { 00:09:00.393 "name": "BaseBdev1", 00:09:00.393 "uuid": "b430a796-59b3-40d3-995b-6181bba60198", 00:09:00.393 "is_configured": true, 00:09:00.393 "data_offset": 2048, 00:09:00.393 "data_size": 63488 00:09:00.393 }, 00:09:00.393 { 00:09:00.393 "name": "BaseBdev2", 00:09:00.393 "uuid": "93ca2c56-4aff-43bb-b76e-38a2b9275e42", 00:09:00.393 "is_configured": true, 00:09:00.393 "data_offset": 2048, 00:09:00.393 "data_size": 63488 00:09:00.393 }, 00:09:00.393 { 00:09:00.393 "name": "BaseBdev3", 00:09:00.393 "uuid": "b246a780-50e6-4b4a-98e0-aa2ebacffee1", 00:09:00.393 "is_configured": true, 00:09:00.393 "data_offset": 2048, 00:09:00.393 "data_size": 63488 00:09:00.393 } 00:09:00.393 ] 00:09:00.393 }' 00:09:00.393 09:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.393 09:46:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.963 [2024-12-06 09:46:26.107258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.963 "name": "Existed_Raid", 00:09:00.963 "aliases": [ 00:09:00.963 "62f763bb-0d89-42a7-a1e2-3b5d7cc40eb1" 00:09:00.963 ], 00:09:00.963 "product_name": "Raid Volume", 00:09:00.963 "block_size": 512, 00:09:00.963 "num_blocks": 190464, 00:09:00.963 "uuid": "62f763bb-0d89-42a7-a1e2-3b5d7cc40eb1", 00:09:00.963 "assigned_rate_limits": { 00:09:00.963 "rw_ios_per_sec": 0, 00:09:00.963 "rw_mbytes_per_sec": 0, 00:09:00.963 
"r_mbytes_per_sec": 0, 00:09:00.963 "w_mbytes_per_sec": 0 00:09:00.963 }, 00:09:00.963 "claimed": false, 00:09:00.963 "zoned": false, 00:09:00.963 "supported_io_types": { 00:09:00.963 "read": true, 00:09:00.963 "write": true, 00:09:00.963 "unmap": true, 00:09:00.963 "flush": true, 00:09:00.963 "reset": true, 00:09:00.963 "nvme_admin": false, 00:09:00.963 "nvme_io": false, 00:09:00.963 "nvme_io_md": false, 00:09:00.963 "write_zeroes": true, 00:09:00.963 "zcopy": false, 00:09:00.963 "get_zone_info": false, 00:09:00.963 "zone_management": false, 00:09:00.963 "zone_append": false, 00:09:00.963 "compare": false, 00:09:00.963 "compare_and_write": false, 00:09:00.963 "abort": false, 00:09:00.963 "seek_hole": false, 00:09:00.963 "seek_data": false, 00:09:00.963 "copy": false, 00:09:00.963 "nvme_iov_md": false 00:09:00.963 }, 00:09:00.963 "memory_domains": [ 00:09:00.963 { 00:09:00.963 "dma_device_id": "system", 00:09:00.963 "dma_device_type": 1 00:09:00.963 }, 00:09:00.963 { 00:09:00.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.963 "dma_device_type": 2 00:09:00.963 }, 00:09:00.963 { 00:09:00.963 "dma_device_id": "system", 00:09:00.963 "dma_device_type": 1 00:09:00.963 }, 00:09:00.963 { 00:09:00.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.963 "dma_device_type": 2 00:09:00.963 }, 00:09:00.963 { 00:09:00.963 "dma_device_id": "system", 00:09:00.963 "dma_device_type": 1 00:09:00.963 }, 00:09:00.963 { 00:09:00.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.963 "dma_device_type": 2 00:09:00.963 } 00:09:00.963 ], 00:09:00.963 "driver_specific": { 00:09:00.963 "raid": { 00:09:00.963 "uuid": "62f763bb-0d89-42a7-a1e2-3b5d7cc40eb1", 00:09:00.963 "strip_size_kb": 64, 00:09:00.963 "state": "online", 00:09:00.963 "raid_level": "concat", 00:09:00.963 "superblock": true, 00:09:00.963 "num_base_bdevs": 3, 00:09:00.963 "num_base_bdevs_discovered": 3, 00:09:00.963 "num_base_bdevs_operational": 3, 00:09:00.963 "base_bdevs_list": [ 00:09:00.963 { 00:09:00.963 
"name": "BaseBdev1", 00:09:00.963 "uuid": "b430a796-59b3-40d3-995b-6181bba60198", 00:09:00.963 "is_configured": true, 00:09:00.963 "data_offset": 2048, 00:09:00.963 "data_size": 63488 00:09:00.963 }, 00:09:00.963 { 00:09:00.963 "name": "BaseBdev2", 00:09:00.963 "uuid": "93ca2c56-4aff-43bb-b76e-38a2b9275e42", 00:09:00.963 "is_configured": true, 00:09:00.963 "data_offset": 2048, 00:09:00.963 "data_size": 63488 00:09:00.963 }, 00:09:00.963 { 00:09:00.963 "name": "BaseBdev3", 00:09:00.963 "uuid": "b246a780-50e6-4b4a-98e0-aa2ebacffee1", 00:09:00.963 "is_configured": true, 00:09:00.963 "data_offset": 2048, 00:09:00.963 "data_size": 63488 00:09:00.963 } 00:09:00.963 ] 00:09:00.963 } 00:09:00.963 } 00:09:00.963 }' 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:00.963 BaseBdev2 00:09:00.963 BaseBdev3' 00:09:00.963 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.223 09:46:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.223 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.223 [2024-12-06 09:46:26.406454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:01.223 [2024-12-06 09:46:26.406532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.224 [2024-12-06 09:46:26.406613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.483 "name": "Existed_Raid", 00:09:01.483 "uuid": "62f763bb-0d89-42a7-a1e2-3b5d7cc40eb1", 00:09:01.483 "strip_size_kb": 64, 00:09:01.483 "state": "offline", 00:09:01.483 "raid_level": "concat", 00:09:01.483 "superblock": true, 00:09:01.483 "num_base_bdevs": 3, 00:09:01.483 "num_base_bdevs_discovered": 2, 00:09:01.483 "num_base_bdevs_operational": 2, 00:09:01.483 "base_bdevs_list": [ 00:09:01.483 { 00:09:01.483 "name": null, 00:09:01.483 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:01.483 "is_configured": false, 00:09:01.483 "data_offset": 0, 00:09:01.483 "data_size": 63488 00:09:01.483 }, 00:09:01.483 { 00:09:01.483 "name": "BaseBdev2", 00:09:01.483 "uuid": "93ca2c56-4aff-43bb-b76e-38a2b9275e42", 00:09:01.483 "is_configured": true, 00:09:01.483 "data_offset": 2048, 00:09:01.483 "data_size": 63488 00:09:01.483 }, 00:09:01.483 { 00:09:01.483 "name": "BaseBdev3", 00:09:01.483 "uuid": "b246a780-50e6-4b4a-98e0-aa2ebacffee1", 00:09:01.483 "is_configured": true, 00:09:01.483 "data_offset": 2048, 00:09:01.483 "data_size": 63488 00:09:01.483 } 00:09:01.483 ] 00:09:01.483 }' 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.483 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.743 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:01.743 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.743 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.743 09:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.743 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.743 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.743 09:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.743 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.743 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.743 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:01.743 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.743 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.004 [2024-12-06 09:46:27.020636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:02.004 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.004 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.004 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.004 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.004 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.004 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.004 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:02.004 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.004 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:02.004 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:02.004 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:02.004 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.004 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.004 [2024-12-06 09:46:27.187115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:02.004 [2024-12-06 09:46:27.187248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.264 BaseBdev2 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.264 
09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.264 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.264 [ 00:09:02.264 { 00:09:02.264 "name": "BaseBdev2", 00:09:02.264 "aliases": [ 00:09:02.264 "c877fc50-95d5-4270-80f1-08d76d8c136f" 00:09:02.264 ], 00:09:02.264 "product_name": "Malloc disk", 00:09:02.264 "block_size": 512, 00:09:02.264 "num_blocks": 65536, 00:09:02.264 "uuid": "c877fc50-95d5-4270-80f1-08d76d8c136f", 00:09:02.264 "assigned_rate_limits": { 00:09:02.264 "rw_ios_per_sec": 0, 00:09:02.264 "rw_mbytes_per_sec": 0, 00:09:02.264 "r_mbytes_per_sec": 0, 00:09:02.264 "w_mbytes_per_sec": 0 
00:09:02.264 }, 00:09:02.264 "claimed": false, 00:09:02.264 "zoned": false, 00:09:02.264 "supported_io_types": { 00:09:02.264 "read": true, 00:09:02.264 "write": true, 00:09:02.264 "unmap": true, 00:09:02.264 "flush": true, 00:09:02.264 "reset": true, 00:09:02.264 "nvme_admin": false, 00:09:02.264 "nvme_io": false, 00:09:02.264 "nvme_io_md": false, 00:09:02.264 "write_zeroes": true, 00:09:02.264 "zcopy": true, 00:09:02.264 "get_zone_info": false, 00:09:02.264 "zone_management": false, 00:09:02.264 "zone_append": false, 00:09:02.264 "compare": false, 00:09:02.264 "compare_and_write": false, 00:09:02.264 "abort": true, 00:09:02.264 "seek_hole": false, 00:09:02.264 "seek_data": false, 00:09:02.264 "copy": true, 00:09:02.264 "nvme_iov_md": false 00:09:02.264 }, 00:09:02.264 "memory_domains": [ 00:09:02.264 { 00:09:02.264 "dma_device_id": "system", 00:09:02.265 "dma_device_type": 1 00:09:02.265 }, 00:09:02.265 { 00:09:02.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.265 "dma_device_type": 2 00:09:02.265 } 00:09:02.265 ], 00:09:02.265 "driver_specific": {} 00:09:02.265 } 00:09:02.265 ] 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.265 BaseBdev3 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.265 [ 00:09:02.265 { 00:09:02.265 "name": "BaseBdev3", 00:09:02.265 "aliases": [ 00:09:02.265 "72488d7f-5f24-4d1e-8f56-2c6c994d887c" 00:09:02.265 ], 00:09:02.265 "product_name": "Malloc disk", 00:09:02.265 "block_size": 512, 00:09:02.265 "num_blocks": 65536, 00:09:02.265 "uuid": "72488d7f-5f24-4d1e-8f56-2c6c994d887c", 00:09:02.265 "assigned_rate_limits": { 00:09:02.265 "rw_ios_per_sec": 0, 00:09:02.265 "rw_mbytes_per_sec": 0, 
00:09:02.265 "r_mbytes_per_sec": 0, 00:09:02.265 "w_mbytes_per_sec": 0 00:09:02.265 }, 00:09:02.265 "claimed": false, 00:09:02.265 "zoned": false, 00:09:02.265 "supported_io_types": { 00:09:02.265 "read": true, 00:09:02.265 "write": true, 00:09:02.265 "unmap": true, 00:09:02.265 "flush": true, 00:09:02.265 "reset": true, 00:09:02.265 "nvme_admin": false, 00:09:02.265 "nvme_io": false, 00:09:02.265 "nvme_io_md": false, 00:09:02.265 "write_zeroes": true, 00:09:02.265 "zcopy": true, 00:09:02.265 "get_zone_info": false, 00:09:02.265 "zone_management": false, 00:09:02.265 "zone_append": false, 00:09:02.265 "compare": false, 00:09:02.265 "compare_and_write": false, 00:09:02.265 "abort": true, 00:09:02.265 "seek_hole": false, 00:09:02.265 "seek_data": false, 00:09:02.265 "copy": true, 00:09:02.265 "nvme_iov_md": false 00:09:02.265 }, 00:09:02.265 "memory_domains": [ 00:09:02.265 { 00:09:02.265 "dma_device_id": "system", 00:09:02.265 "dma_device_type": 1 00:09:02.265 }, 00:09:02.265 { 00:09:02.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.265 "dma_device_type": 2 00:09:02.265 } 00:09:02.265 ], 00:09:02.265 "driver_specific": {} 00:09:02.265 } 00:09:02.265 ] 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.265 [2024-12-06 09:46:27.511986] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.265 [2024-12-06 09:46:27.512105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.265 [2024-12-06 09:46:27.512169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.265 [2024-12-06 09:46:27.514408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.265 09:46:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.265 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.523 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.523 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.523 "name": "Existed_Raid", 00:09:02.523 "uuid": "7a959b73-36f6-42d6-868f-41e277e86fdc", 00:09:02.523 "strip_size_kb": 64, 00:09:02.523 "state": "configuring", 00:09:02.523 "raid_level": "concat", 00:09:02.523 "superblock": true, 00:09:02.523 "num_base_bdevs": 3, 00:09:02.523 "num_base_bdevs_discovered": 2, 00:09:02.523 "num_base_bdevs_operational": 3, 00:09:02.523 "base_bdevs_list": [ 00:09:02.523 { 00:09:02.523 "name": "BaseBdev1", 00:09:02.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.523 "is_configured": false, 00:09:02.523 "data_offset": 0, 00:09:02.523 "data_size": 0 00:09:02.523 }, 00:09:02.523 { 00:09:02.523 "name": "BaseBdev2", 00:09:02.523 "uuid": "c877fc50-95d5-4270-80f1-08d76d8c136f", 00:09:02.523 "is_configured": true, 00:09:02.523 "data_offset": 2048, 00:09:02.523 "data_size": 63488 00:09:02.523 }, 00:09:02.523 { 00:09:02.523 "name": "BaseBdev3", 00:09:02.523 "uuid": "72488d7f-5f24-4d1e-8f56-2c6c994d887c", 00:09:02.523 "is_configured": true, 00:09:02.523 "data_offset": 2048, 00:09:02.523 "data_size": 63488 00:09:02.523 } 00:09:02.523 ] 00:09:02.523 }' 00:09:02.523 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.523 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.781 [2024-12-06 09:46:27.911920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.781 "name": "Existed_Raid", 00:09:02.781 "uuid": "7a959b73-36f6-42d6-868f-41e277e86fdc", 00:09:02.781 "strip_size_kb": 64, 00:09:02.781 "state": "configuring", 00:09:02.781 "raid_level": "concat", 00:09:02.781 "superblock": true, 00:09:02.781 "num_base_bdevs": 3, 00:09:02.781 "num_base_bdevs_discovered": 1, 00:09:02.781 "num_base_bdevs_operational": 3, 00:09:02.781 "base_bdevs_list": [ 00:09:02.781 { 00:09:02.781 "name": "BaseBdev1", 00:09:02.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.781 "is_configured": false, 00:09:02.781 "data_offset": 0, 00:09:02.781 "data_size": 0 00:09:02.781 }, 00:09:02.781 { 00:09:02.781 "name": null, 00:09:02.781 "uuid": "c877fc50-95d5-4270-80f1-08d76d8c136f", 00:09:02.781 "is_configured": false, 00:09:02.781 "data_offset": 0, 00:09:02.781 "data_size": 63488 00:09:02.781 }, 00:09:02.781 { 00:09:02.781 "name": "BaseBdev3", 00:09:02.781 "uuid": "72488d7f-5f24-4d1e-8f56-2c6c994d887c", 00:09:02.781 "is_configured": true, 00:09:02.781 "data_offset": 2048, 00:09:02.781 "data_size": 63488 00:09:02.781 } 00:09:02.781 ] 00:09:02.781 }' 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.781 09:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.039 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.039 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.039 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:09:03.039 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.296 [2024-12-06 09:46:28.383514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.296 BaseBdev1 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.296 09:46:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.296 [ 00:09:03.296 { 00:09:03.296 "name": "BaseBdev1", 00:09:03.296 "aliases": [ 00:09:03.296 "6241228b-e541-44a6-b459-b56e03f6fcf4" 00:09:03.296 ], 00:09:03.296 "product_name": "Malloc disk", 00:09:03.296 "block_size": 512, 00:09:03.296 "num_blocks": 65536, 00:09:03.296 "uuid": "6241228b-e541-44a6-b459-b56e03f6fcf4", 00:09:03.296 "assigned_rate_limits": { 00:09:03.296 "rw_ios_per_sec": 0, 00:09:03.296 "rw_mbytes_per_sec": 0, 00:09:03.296 "r_mbytes_per_sec": 0, 00:09:03.296 "w_mbytes_per_sec": 0 00:09:03.296 }, 00:09:03.296 "claimed": true, 00:09:03.296 "claim_type": "exclusive_write", 00:09:03.296 "zoned": false, 00:09:03.296 "supported_io_types": { 00:09:03.296 "read": true, 00:09:03.296 "write": true, 00:09:03.296 "unmap": true, 00:09:03.296 "flush": true, 00:09:03.296 "reset": true, 00:09:03.296 "nvme_admin": false, 00:09:03.296 "nvme_io": false, 00:09:03.296 "nvme_io_md": false, 00:09:03.296 "write_zeroes": true, 00:09:03.296 "zcopy": true, 00:09:03.296 "get_zone_info": false, 00:09:03.296 "zone_management": false, 00:09:03.296 "zone_append": false, 00:09:03.296 "compare": false, 00:09:03.296 "compare_and_write": false, 00:09:03.296 "abort": true, 00:09:03.296 "seek_hole": false, 00:09:03.296 "seek_data": false, 00:09:03.296 "copy": true, 00:09:03.296 "nvme_iov_md": false 00:09:03.296 }, 00:09:03.296 "memory_domains": [ 00:09:03.296 { 00:09:03.296 "dma_device_id": "system", 00:09:03.296 "dma_device_type": 1 00:09:03.296 }, 00:09:03.296 { 00:09:03.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.296 
"dma_device_type": 2 00:09:03.296 } 00:09:03.296 ], 00:09:03.296 "driver_specific": {} 00:09:03.296 } 00:09:03.296 ] 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.296 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.296 "name": "Existed_Raid", 00:09:03.297 "uuid": "7a959b73-36f6-42d6-868f-41e277e86fdc", 00:09:03.297 "strip_size_kb": 64, 00:09:03.297 "state": "configuring", 00:09:03.297 "raid_level": "concat", 00:09:03.297 "superblock": true, 00:09:03.297 "num_base_bdevs": 3, 00:09:03.297 "num_base_bdevs_discovered": 2, 00:09:03.297 "num_base_bdevs_operational": 3, 00:09:03.297 "base_bdevs_list": [ 00:09:03.297 { 00:09:03.297 "name": "BaseBdev1", 00:09:03.297 "uuid": "6241228b-e541-44a6-b459-b56e03f6fcf4", 00:09:03.297 "is_configured": true, 00:09:03.297 "data_offset": 2048, 00:09:03.297 "data_size": 63488 00:09:03.297 }, 00:09:03.297 { 00:09:03.297 "name": null, 00:09:03.297 "uuid": "c877fc50-95d5-4270-80f1-08d76d8c136f", 00:09:03.297 "is_configured": false, 00:09:03.297 "data_offset": 0, 00:09:03.297 "data_size": 63488 00:09:03.297 }, 00:09:03.297 { 00:09:03.297 "name": "BaseBdev3", 00:09:03.297 "uuid": "72488d7f-5f24-4d1e-8f56-2c6c994d887c", 00:09:03.297 "is_configured": true, 00:09:03.297 "data_offset": 2048, 00:09:03.297 "data_size": 63488 00:09:03.297 } 00:09:03.297 ] 00:09:03.297 }' 00:09:03.297 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.297 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.555 [2024-12-06 09:46:28.815131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.555 
09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.555 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.813 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.813 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.813 "name": "Existed_Raid", 00:09:03.813 "uuid": "7a959b73-36f6-42d6-868f-41e277e86fdc", 00:09:03.813 "strip_size_kb": 64, 00:09:03.813 "state": "configuring", 00:09:03.813 "raid_level": "concat", 00:09:03.813 "superblock": true, 00:09:03.813 "num_base_bdevs": 3, 00:09:03.813 "num_base_bdevs_discovered": 1, 00:09:03.813 "num_base_bdevs_operational": 3, 00:09:03.813 "base_bdevs_list": [ 00:09:03.813 { 00:09:03.813 "name": "BaseBdev1", 00:09:03.813 "uuid": "6241228b-e541-44a6-b459-b56e03f6fcf4", 00:09:03.813 "is_configured": true, 00:09:03.813 "data_offset": 2048, 00:09:03.813 "data_size": 63488 00:09:03.813 }, 00:09:03.813 { 00:09:03.813 "name": null, 00:09:03.813 "uuid": "c877fc50-95d5-4270-80f1-08d76d8c136f", 00:09:03.813 "is_configured": false, 00:09:03.813 "data_offset": 0, 00:09:03.813 "data_size": 63488 00:09:03.813 }, 00:09:03.813 { 00:09:03.813 "name": null, 00:09:03.813 "uuid": "72488d7f-5f24-4d1e-8f56-2c6c994d887c", 00:09:03.813 "is_configured": false, 00:09:03.813 "data_offset": 0, 00:09:03.813 "data_size": 63488 00:09:03.813 } 00:09:03.813 ] 00:09:03.813 }' 00:09:03.813 09:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.813 09:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.071 
09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.071 [2024-12-06 09:46:29.262399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.071 "name": "Existed_Raid", 00:09:04.071 "uuid": "7a959b73-36f6-42d6-868f-41e277e86fdc", 00:09:04.071 "strip_size_kb": 64, 00:09:04.071 "state": "configuring", 00:09:04.071 "raid_level": "concat", 00:09:04.071 "superblock": true, 00:09:04.071 "num_base_bdevs": 3, 00:09:04.071 "num_base_bdevs_discovered": 2, 00:09:04.071 "num_base_bdevs_operational": 3, 00:09:04.071 "base_bdevs_list": [ 00:09:04.071 { 00:09:04.071 "name": "BaseBdev1", 00:09:04.071 "uuid": "6241228b-e541-44a6-b459-b56e03f6fcf4", 00:09:04.071 "is_configured": true, 00:09:04.071 "data_offset": 2048, 00:09:04.071 "data_size": 63488 00:09:04.071 }, 00:09:04.071 { 00:09:04.071 "name": null, 00:09:04.071 "uuid": "c877fc50-95d5-4270-80f1-08d76d8c136f", 00:09:04.071 "is_configured": false, 00:09:04.071 "data_offset": 0, 00:09:04.071 "data_size": 
63488 00:09:04.071 }, 00:09:04.071 { 00:09:04.071 "name": "BaseBdev3", 00:09:04.071 "uuid": "72488d7f-5f24-4d1e-8f56-2c6c994d887c", 00:09:04.071 "is_configured": true, 00:09:04.071 "data_offset": 2048, 00:09:04.071 "data_size": 63488 00:09:04.071 } 00:09:04.071 ] 00:09:04.071 }' 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.071 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.635 [2024-12-06 09:46:29.682350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.635 "name": "Existed_Raid", 00:09:04.635 "uuid": "7a959b73-36f6-42d6-868f-41e277e86fdc", 00:09:04.635 "strip_size_kb": 64, 00:09:04.635 "state": "configuring", 00:09:04.635 "raid_level": "concat", 00:09:04.635 "superblock": true, 00:09:04.635 "num_base_bdevs": 3, 00:09:04.635 "num_base_bdevs_discovered": 1, 00:09:04.635 "num_base_bdevs_operational": 
3, 00:09:04.635 "base_bdevs_list": [ 00:09:04.635 { 00:09:04.635 "name": null, 00:09:04.635 "uuid": "6241228b-e541-44a6-b459-b56e03f6fcf4", 00:09:04.635 "is_configured": false, 00:09:04.635 "data_offset": 0, 00:09:04.635 "data_size": 63488 00:09:04.635 }, 00:09:04.635 { 00:09:04.635 "name": null, 00:09:04.635 "uuid": "c877fc50-95d5-4270-80f1-08d76d8c136f", 00:09:04.635 "is_configured": false, 00:09:04.635 "data_offset": 0, 00:09:04.635 "data_size": 63488 00:09:04.635 }, 00:09:04.635 { 00:09:04.635 "name": "BaseBdev3", 00:09:04.635 "uuid": "72488d7f-5f24-4d1e-8f56-2c6c994d887c", 00:09:04.635 "is_configured": true, 00:09:04.635 "data_offset": 2048, 00:09:04.635 "data_size": 63488 00:09:04.635 } 00:09:04.635 ] 00:09:04.635 }' 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.635 09:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.893 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.893 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:04.893 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.893 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.893 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:05.149 [2024-12-06 09:46:30.187313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.149 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.150 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.150 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.150 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.150 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:09:05.150 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.150 "name": "Existed_Raid", 00:09:05.150 "uuid": "7a959b73-36f6-42d6-868f-41e277e86fdc", 00:09:05.150 "strip_size_kb": 64, 00:09:05.150 "state": "configuring", 00:09:05.150 "raid_level": "concat", 00:09:05.150 "superblock": true, 00:09:05.150 "num_base_bdevs": 3, 00:09:05.150 "num_base_bdevs_discovered": 2, 00:09:05.150 "num_base_bdevs_operational": 3, 00:09:05.150 "base_bdevs_list": [ 00:09:05.150 { 00:09:05.150 "name": null, 00:09:05.150 "uuid": "6241228b-e541-44a6-b459-b56e03f6fcf4", 00:09:05.150 "is_configured": false, 00:09:05.150 "data_offset": 0, 00:09:05.150 "data_size": 63488 00:09:05.150 }, 00:09:05.150 { 00:09:05.150 "name": "BaseBdev2", 00:09:05.150 "uuid": "c877fc50-95d5-4270-80f1-08d76d8c136f", 00:09:05.150 "is_configured": true, 00:09:05.150 "data_offset": 2048, 00:09:05.150 "data_size": 63488 00:09:05.150 }, 00:09:05.150 { 00:09:05.150 "name": "BaseBdev3", 00:09:05.150 "uuid": "72488d7f-5f24-4d1e-8f56-2c6c994d887c", 00:09:05.150 "is_configured": true, 00:09:05.150 "data_offset": 2048, 00:09:05.150 "data_size": 63488 00:09:05.150 } 00:09:05.150 ] 00:09:05.150 }' 00:09:05.150 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.150 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6241228b-e541-44a6-b459-b56e03f6fcf4 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.407 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.672 [2024-12-06 09:46:30.686192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:05.672 [2024-12-06 09:46:30.686525] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:05.672 [2024-12-06 09:46:30.686576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:05.672 [2024-12-06 09:46:30.686899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:05.672 NewBaseBdev 00:09:05.672 [2024-12-06 09:46:30.687103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:05.672 [2024-12-06 09:46:30.687123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:05.672 [2024-12-06 09:46:30.687271] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.672 [ 00:09:05.672 { 00:09:05.672 "name": "NewBaseBdev", 00:09:05.672 "aliases": [ 00:09:05.672 "6241228b-e541-44a6-b459-b56e03f6fcf4" 00:09:05.672 ], 00:09:05.672 "product_name": "Malloc disk", 00:09:05.672 "block_size": 512, 00:09:05.672 "num_blocks": 65536, 00:09:05.672 "uuid": 
"6241228b-e541-44a6-b459-b56e03f6fcf4", 00:09:05.672 "assigned_rate_limits": { 00:09:05.672 "rw_ios_per_sec": 0, 00:09:05.672 "rw_mbytes_per_sec": 0, 00:09:05.672 "r_mbytes_per_sec": 0, 00:09:05.672 "w_mbytes_per_sec": 0 00:09:05.672 }, 00:09:05.672 "claimed": true, 00:09:05.672 "claim_type": "exclusive_write", 00:09:05.672 "zoned": false, 00:09:05.672 "supported_io_types": { 00:09:05.672 "read": true, 00:09:05.672 "write": true, 00:09:05.672 "unmap": true, 00:09:05.672 "flush": true, 00:09:05.672 "reset": true, 00:09:05.672 "nvme_admin": false, 00:09:05.672 "nvme_io": false, 00:09:05.672 "nvme_io_md": false, 00:09:05.672 "write_zeroes": true, 00:09:05.672 "zcopy": true, 00:09:05.672 "get_zone_info": false, 00:09:05.672 "zone_management": false, 00:09:05.672 "zone_append": false, 00:09:05.672 "compare": false, 00:09:05.672 "compare_and_write": false, 00:09:05.672 "abort": true, 00:09:05.672 "seek_hole": false, 00:09:05.672 "seek_data": false, 00:09:05.672 "copy": true, 00:09:05.672 "nvme_iov_md": false 00:09:05.672 }, 00:09:05.672 "memory_domains": [ 00:09:05.672 { 00:09:05.672 "dma_device_id": "system", 00:09:05.672 "dma_device_type": 1 00:09:05.672 }, 00:09:05.672 { 00:09:05.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.672 "dma_device_type": 2 00:09:05.672 } 00:09:05.672 ], 00:09:05.672 "driver_specific": {} 00:09:05.672 } 00:09:05.672 ] 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.672 09:46:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.672 "name": "Existed_Raid", 00:09:05.672 "uuid": "7a959b73-36f6-42d6-868f-41e277e86fdc", 00:09:05.672 "strip_size_kb": 64, 00:09:05.672 "state": "online", 00:09:05.672 "raid_level": "concat", 00:09:05.672 "superblock": true, 00:09:05.672 "num_base_bdevs": 3, 00:09:05.672 "num_base_bdevs_discovered": 3, 00:09:05.672 "num_base_bdevs_operational": 3, 00:09:05.672 "base_bdevs_list": [ 00:09:05.672 { 00:09:05.672 "name": "NewBaseBdev", 00:09:05.672 "uuid": "6241228b-e541-44a6-b459-b56e03f6fcf4", 00:09:05.672 "is_configured": 
true, 00:09:05.672 "data_offset": 2048, 00:09:05.672 "data_size": 63488 00:09:05.672 }, 00:09:05.672 { 00:09:05.672 "name": "BaseBdev2", 00:09:05.672 "uuid": "c877fc50-95d5-4270-80f1-08d76d8c136f", 00:09:05.672 "is_configured": true, 00:09:05.672 "data_offset": 2048, 00:09:05.672 "data_size": 63488 00:09:05.672 }, 00:09:05.672 { 00:09:05.672 "name": "BaseBdev3", 00:09:05.672 "uuid": "72488d7f-5f24-4d1e-8f56-2c6c994d887c", 00:09:05.672 "is_configured": true, 00:09:05.672 "data_offset": 2048, 00:09:05.672 "data_size": 63488 00:09:05.672 } 00:09:05.672 ] 00:09:05.672 }' 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.672 09:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.947 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.947 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.947 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.947 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.947 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.947 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.947 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.947 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.947 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.947 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.947 [2024-12-06 09:46:31.173703] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.947 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.947 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.947 "name": "Existed_Raid", 00:09:05.947 "aliases": [ 00:09:05.947 "7a959b73-36f6-42d6-868f-41e277e86fdc" 00:09:05.947 ], 00:09:05.947 "product_name": "Raid Volume", 00:09:05.947 "block_size": 512, 00:09:05.947 "num_blocks": 190464, 00:09:05.947 "uuid": "7a959b73-36f6-42d6-868f-41e277e86fdc", 00:09:05.947 "assigned_rate_limits": { 00:09:05.947 "rw_ios_per_sec": 0, 00:09:05.947 "rw_mbytes_per_sec": 0, 00:09:05.947 "r_mbytes_per_sec": 0, 00:09:05.947 "w_mbytes_per_sec": 0 00:09:05.947 }, 00:09:05.947 "claimed": false, 00:09:05.947 "zoned": false, 00:09:05.947 "supported_io_types": { 00:09:05.947 "read": true, 00:09:05.947 "write": true, 00:09:05.947 "unmap": true, 00:09:05.947 "flush": true, 00:09:05.947 "reset": true, 00:09:05.947 "nvme_admin": false, 00:09:05.947 "nvme_io": false, 00:09:05.947 "nvme_io_md": false, 00:09:05.947 "write_zeroes": true, 00:09:05.947 "zcopy": false, 00:09:05.947 "get_zone_info": false, 00:09:05.947 "zone_management": false, 00:09:05.947 "zone_append": false, 00:09:05.947 "compare": false, 00:09:05.947 "compare_and_write": false, 00:09:05.947 "abort": false, 00:09:05.947 "seek_hole": false, 00:09:05.947 "seek_data": false, 00:09:05.947 "copy": false, 00:09:05.947 "nvme_iov_md": false 00:09:05.947 }, 00:09:05.947 "memory_domains": [ 00:09:05.947 { 00:09:05.947 "dma_device_id": "system", 00:09:05.947 "dma_device_type": 1 00:09:05.947 }, 00:09:05.947 { 00:09:05.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.947 "dma_device_type": 2 00:09:05.947 }, 00:09:05.947 { 00:09:05.947 "dma_device_id": "system", 00:09:05.947 "dma_device_type": 1 00:09:05.947 }, 00:09:05.947 { 00:09:05.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.947 
"dma_device_type": 2 00:09:05.947 }, 00:09:05.947 { 00:09:05.947 "dma_device_id": "system", 00:09:05.947 "dma_device_type": 1 00:09:05.947 }, 00:09:05.947 { 00:09:05.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.947 "dma_device_type": 2 00:09:05.947 } 00:09:05.947 ], 00:09:05.947 "driver_specific": { 00:09:05.947 "raid": { 00:09:05.947 "uuid": "7a959b73-36f6-42d6-868f-41e277e86fdc", 00:09:05.947 "strip_size_kb": 64, 00:09:05.947 "state": "online", 00:09:05.947 "raid_level": "concat", 00:09:05.947 "superblock": true, 00:09:05.947 "num_base_bdevs": 3, 00:09:05.947 "num_base_bdevs_discovered": 3, 00:09:05.947 "num_base_bdevs_operational": 3, 00:09:05.947 "base_bdevs_list": [ 00:09:05.947 { 00:09:05.947 "name": "NewBaseBdev", 00:09:05.947 "uuid": "6241228b-e541-44a6-b459-b56e03f6fcf4", 00:09:05.947 "is_configured": true, 00:09:05.947 "data_offset": 2048, 00:09:05.947 "data_size": 63488 00:09:05.947 }, 00:09:05.947 { 00:09:05.947 "name": "BaseBdev2", 00:09:05.947 "uuid": "c877fc50-95d5-4270-80f1-08d76d8c136f", 00:09:05.947 "is_configured": true, 00:09:05.947 "data_offset": 2048, 00:09:05.947 "data_size": 63488 00:09:05.947 }, 00:09:05.947 { 00:09:05.947 "name": "BaseBdev3", 00:09:05.947 "uuid": "72488d7f-5f24-4d1e-8f56-2c6c994d887c", 00:09:05.947 "is_configured": true, 00:09:05.947 "data_offset": 2048, 00:09:05.947 "data_size": 63488 00:09:05.947 } 00:09:05.947 ] 00:09:05.947 } 00:09:05.947 } 00:09:05.947 }' 00:09:05.947 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:06.206 BaseBdev2 00:09:06.206 BaseBdev3' 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.206 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.207 
09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.207 [2024-12-06 09:46:31.452932] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:06.207 [2024-12-06 09:46:31.453004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.207 [2024-12-06 09:46:31.453148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.207 [2024-12-06 09:46:31.453253] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:06.207 [2024-12-06 09:46:31.453312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:06.207 09:46:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66190 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66190 ']' 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66190 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.207 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66190 00:09:06.466 killing process with pid 66190 00:09:06.466 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.466 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.466 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66190' 00:09:06.466 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66190 00:09:06.466 [2024-12-06 09:46:31.491470] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:06.466 09:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66190 00:09:06.724 [2024-12-06 09:46:31.793866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.662 09:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:07.662 00:09:07.662 real 0m10.343s 00:09:07.662 user 0m16.500s 00:09:07.662 sys 0m1.546s 00:09:07.662 ************************************ 00:09:07.662 END TEST raid_state_function_test_sb 00:09:07.662 ************************************ 00:09:07.662 09:46:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.662 09:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.922 09:46:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:07.922 09:46:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:07.922 09:46:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.922 09:46:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.922 ************************************ 00:09:07.922 START TEST raid_superblock_test 00:09:07.922 ************************************ 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66810 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66810 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66810 ']' 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.922 09:46:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.922 [2024-12-06 09:46:33.075372] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:09:07.923 [2024-12-06 09:46:33.075492] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66810 ] 00:09:08.183 [2024-12-06 09:46:33.248049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.183 [2024-12-06 09:46:33.362466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.442 [2024-12-06 09:46:33.565956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.442 [2024-12-06 09:46:33.565999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:08.701 
09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.701 malloc1 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.701 [2024-12-06 09:46:33.962730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:08.701 [2024-12-06 09:46:33.962798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.701 [2024-12-06 09:46:33.962820] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:08.701 [2024-12-06 09:46:33.962830] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.701 [2024-12-06 09:46:33.964974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.701 [2024-12-06 09:46:33.965013] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:08.701 pt1 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.701 09:46:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 malloc2 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 [2024-12-06 09:46:34.017821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:08.961 [2024-12-06 09:46:34.017884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.961 [2024-12-06 09:46:34.017909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:08.961 [2024-12-06 09:46:34.017918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.961 [2024-12-06 09:46:34.019951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.961 [2024-12-06 09:46:34.019990] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:08.961 
pt2 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 malloc3 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 [2024-12-06 09:46:34.081517] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:08.961 [2024-12-06 09:46:34.081577] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.961 [2024-12-06 09:46:34.081598] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:08.961 [2024-12-06 09:46:34.081607] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.961 [2024-12-06 09:46:34.083660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.961 [2024-12-06 09:46:34.083698] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:08.961 pt3 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 [2024-12-06 09:46:34.093569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:08.961 [2024-12-06 09:46:34.095340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:08.961 [2024-12-06 09:46:34.095414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:08.961 [2024-12-06 09:46:34.095585] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:08.961 [2024-12-06 09:46:34.095612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:08.961 [2024-12-06 09:46:34.095889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:08.961 [2024-12-06 09:46:34.096074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:08.961 [2024-12-06 09:46:34.096093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:08.961 [2024-12-06 09:46:34.096270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.961 09:46:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.961 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.961 "name": "raid_bdev1", 00:09:08.961 "uuid": "11231a2d-20b8-45bb-b13a-11c6d1a0ba16", 00:09:08.961 "strip_size_kb": 64, 00:09:08.962 "state": "online", 00:09:08.962 "raid_level": "concat", 00:09:08.962 "superblock": true, 00:09:08.962 "num_base_bdevs": 3, 00:09:08.962 "num_base_bdevs_discovered": 3, 00:09:08.962 "num_base_bdevs_operational": 3, 00:09:08.962 "base_bdevs_list": [ 00:09:08.962 { 00:09:08.962 "name": "pt1", 00:09:08.962 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.962 "is_configured": true, 00:09:08.962 "data_offset": 2048, 00:09:08.962 "data_size": 63488 00:09:08.962 }, 00:09:08.962 { 00:09:08.962 "name": "pt2", 00:09:08.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.962 "is_configured": true, 00:09:08.962 "data_offset": 2048, 00:09:08.962 "data_size": 63488 00:09:08.962 }, 00:09:08.962 { 00:09:08.962 "name": "pt3", 00:09:08.962 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:08.962 "is_configured": true, 00:09:08.962 "data_offset": 2048, 00:09:08.962 "data_size": 63488 00:09:08.962 } 00:09:08.962 ] 00:09:08.962 }' 00:09:08.962 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.962 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.531 [2024-12-06 09:46:34.521109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.531 "name": "raid_bdev1", 00:09:09.531 "aliases": [ 00:09:09.531 "11231a2d-20b8-45bb-b13a-11c6d1a0ba16" 00:09:09.531 ], 00:09:09.531 "product_name": "Raid Volume", 00:09:09.531 "block_size": 512, 00:09:09.531 "num_blocks": 190464, 00:09:09.531 "uuid": "11231a2d-20b8-45bb-b13a-11c6d1a0ba16", 00:09:09.531 "assigned_rate_limits": { 00:09:09.531 "rw_ios_per_sec": 0, 00:09:09.531 "rw_mbytes_per_sec": 0, 00:09:09.531 "r_mbytes_per_sec": 0, 00:09:09.531 "w_mbytes_per_sec": 0 00:09:09.531 }, 00:09:09.531 "claimed": false, 00:09:09.531 "zoned": false, 00:09:09.531 "supported_io_types": { 00:09:09.531 "read": true, 00:09:09.531 "write": true, 00:09:09.531 "unmap": true, 00:09:09.531 "flush": true, 00:09:09.531 "reset": true, 00:09:09.531 "nvme_admin": false, 00:09:09.531 "nvme_io": false, 00:09:09.531 "nvme_io_md": false, 00:09:09.531 "write_zeroes": true, 00:09:09.531 "zcopy": false, 00:09:09.531 "get_zone_info": false, 00:09:09.531 "zone_management": false, 00:09:09.531 "zone_append": false, 00:09:09.531 "compare": 
false, 00:09:09.531 "compare_and_write": false, 00:09:09.531 "abort": false, 00:09:09.531 "seek_hole": false, 00:09:09.531 "seek_data": false, 00:09:09.531 "copy": false, 00:09:09.531 "nvme_iov_md": false 00:09:09.531 }, 00:09:09.531 "memory_domains": [ 00:09:09.531 { 00:09:09.531 "dma_device_id": "system", 00:09:09.531 "dma_device_type": 1 00:09:09.531 }, 00:09:09.531 { 00:09:09.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.531 "dma_device_type": 2 00:09:09.531 }, 00:09:09.531 { 00:09:09.531 "dma_device_id": "system", 00:09:09.531 "dma_device_type": 1 00:09:09.531 }, 00:09:09.531 { 00:09:09.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.531 "dma_device_type": 2 00:09:09.531 }, 00:09:09.531 { 00:09:09.531 "dma_device_id": "system", 00:09:09.531 "dma_device_type": 1 00:09:09.531 }, 00:09:09.531 { 00:09:09.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.531 "dma_device_type": 2 00:09:09.531 } 00:09:09.531 ], 00:09:09.531 "driver_specific": { 00:09:09.531 "raid": { 00:09:09.531 "uuid": "11231a2d-20b8-45bb-b13a-11c6d1a0ba16", 00:09:09.531 "strip_size_kb": 64, 00:09:09.531 "state": "online", 00:09:09.531 "raid_level": "concat", 00:09:09.531 "superblock": true, 00:09:09.531 "num_base_bdevs": 3, 00:09:09.531 "num_base_bdevs_discovered": 3, 00:09:09.531 "num_base_bdevs_operational": 3, 00:09:09.531 "base_bdevs_list": [ 00:09:09.531 { 00:09:09.531 "name": "pt1", 00:09:09.531 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.531 "is_configured": true, 00:09:09.531 "data_offset": 2048, 00:09:09.531 "data_size": 63488 00:09:09.531 }, 00:09:09.531 { 00:09:09.531 "name": "pt2", 00:09:09.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.531 "is_configured": true, 00:09:09.531 "data_offset": 2048, 00:09:09.531 "data_size": 63488 00:09:09.531 }, 00:09:09.531 { 00:09:09.531 "name": "pt3", 00:09:09.531 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:09.531 "is_configured": true, 00:09:09.531 "data_offset": 2048, 00:09:09.531 
"data_size": 63488 00:09:09.531 } 00:09:09.531 ] 00:09:09.531 } 00:09:09.531 } 00:09:09.531 }' 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:09.531 pt2 00:09:09.531 pt3' 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.531 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.532 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:09.793 [2024-12-06 09:46:34.812523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=11231a2d-20b8-45bb-b13a-11c6d1a0ba16 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 11231a2d-20b8-45bb-b13a-11c6d1a0ba16 ']' 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.793 [2024-12-06 09:46:34.864208] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.793 [2024-12-06 09:46:34.864243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.793 [2024-12-06 09:46:34.864325] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.793 [2024-12-06 09:46:34.864392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.793 [2024-12-06 09:46:34.864403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 
00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.793 09:46:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.793 [2024-12-06 09:46:35.004023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:09.793 [2024-12-06 09:46:35.005900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:09.793 
[2024-12-06 09:46:35.005963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:09.793 [2024-12-06 09:46:35.006018] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:09.793 [2024-12-06 09:46:35.006077] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:09.793 [2024-12-06 09:46:35.006110] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:09.793 [2024-12-06 09:46:35.006131] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:09.793 [2024-12-06 09:46:35.006151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:09.793 request: 00:09:09.793 { 00:09:09.793 "name": "raid_bdev1", 00:09:09.794 "raid_level": "concat", 00:09:09.794 "base_bdevs": [ 00:09:09.794 "malloc1", 00:09:09.794 "malloc2", 00:09:09.794 "malloc3" 00:09:09.794 ], 00:09:09.794 "strip_size_kb": 64, 00:09:09.794 "superblock": false, 00:09:09.794 "method": "bdev_raid_create", 00:09:09.794 "req_id": 1 00:09:09.794 } 00:09:09.794 Got JSON-RPC error response 00:09:09.794 response: 00:09:09.794 { 00:09:09.794 "code": -17, 00:09:09.794 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:09.794 } 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:09.794 09:46:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.794 [2024-12-06 09:46:35.051909] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:09.794 [2024-12-06 09:46:35.051980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.794 [2024-12-06 09:46:35.051999] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:09.794 [2024-12-06 09:46:35.052007] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.794 [2024-12-06 09:46:35.054172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.794 [2024-12-06 09:46:35.054210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:09.794 [2024-12-06 09:46:35.054303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:09.794 [2024-12-06 09:46:35.054378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:09.794 pt1 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.794 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.052 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.052 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.052 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.052 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.052 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.052 "name": "raid_bdev1", 00:09:10.052 "uuid": 
"11231a2d-20b8-45bb-b13a-11c6d1a0ba16", 00:09:10.052 "strip_size_kb": 64, 00:09:10.052 "state": "configuring", 00:09:10.052 "raid_level": "concat", 00:09:10.052 "superblock": true, 00:09:10.052 "num_base_bdevs": 3, 00:09:10.052 "num_base_bdevs_discovered": 1, 00:09:10.052 "num_base_bdevs_operational": 3, 00:09:10.052 "base_bdevs_list": [ 00:09:10.052 { 00:09:10.052 "name": "pt1", 00:09:10.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.052 "is_configured": true, 00:09:10.052 "data_offset": 2048, 00:09:10.052 "data_size": 63488 00:09:10.052 }, 00:09:10.052 { 00:09:10.052 "name": null, 00:09:10.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.052 "is_configured": false, 00:09:10.052 "data_offset": 2048, 00:09:10.052 "data_size": 63488 00:09:10.052 }, 00:09:10.052 { 00:09:10.052 "name": null, 00:09:10.052 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.052 "is_configured": false, 00:09:10.052 "data_offset": 2048, 00:09:10.052 "data_size": 63488 00:09:10.052 } 00:09:10.052 ] 00:09:10.052 }' 00:09:10.052 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.052 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.311 [2024-12-06 09:46:35.479215] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:10.311 [2024-12-06 09:46:35.479299] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.311 [2024-12-06 09:46:35.479344] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:10.311 [2024-12-06 09:46:35.479355] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.311 [2024-12-06 09:46:35.479838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.311 [2024-12-06 09:46:35.479868] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:10.311 [2024-12-06 09:46:35.479964] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:10.311 [2024-12-06 09:46:35.480009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:10.311 pt2 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.311 [2024-12-06 09:46:35.491185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.311 "name": "raid_bdev1", 00:09:10.311 "uuid": "11231a2d-20b8-45bb-b13a-11c6d1a0ba16", 00:09:10.311 "strip_size_kb": 64, 00:09:10.311 "state": "configuring", 00:09:10.311 "raid_level": "concat", 00:09:10.311 "superblock": true, 00:09:10.311 "num_base_bdevs": 3, 00:09:10.311 "num_base_bdevs_discovered": 1, 00:09:10.311 "num_base_bdevs_operational": 3, 00:09:10.311 "base_bdevs_list": [ 00:09:10.311 { 00:09:10.311 "name": "pt1", 00:09:10.311 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.311 "is_configured": true, 00:09:10.311 "data_offset": 2048, 00:09:10.311 "data_size": 63488 00:09:10.311 }, 00:09:10.311 { 00:09:10.311 "name": null, 00:09:10.311 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.311 "is_configured": false, 00:09:10.311 "data_offset": 0, 00:09:10.311 "data_size": 63488 00:09:10.311 }, 00:09:10.311 { 00:09:10.311 "name": null, 00:09:10.311 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:10.311 "is_configured": false, 00:09:10.311 "data_offset": 2048, 00:09:10.311 "data_size": 63488 00:09:10.311 } 00:09:10.311 ] 00:09:10.311 }' 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.311 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.880 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:10.880 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:10.880 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:10.880 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.880 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.880 [2024-12-06 09:46:35.922402] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:10.880 [2024-12-06 09:46:35.922473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.880 [2024-12-06 09:46:35.922492] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:10.880 [2024-12-06 09:46:35.922503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.880 [2024-12-06 09:46:35.923035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.880 [2024-12-06 09:46:35.923073] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:10.880 [2024-12-06 09:46:35.923184] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:10.880 [2024-12-06 09:46:35.923226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:10.880 pt2 00:09:10.880 09:46:35 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.880 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:10.880 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:10.880 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:10.880 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.880 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.880 [2024-12-06 09:46:35.934355] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:10.880 [2024-12-06 09:46:35.934406] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.880 [2024-12-06 09:46:35.934420] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:10.880 [2024-12-06 09:46:35.934429] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.880 [2024-12-06 09:46:35.934841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.881 [2024-12-06 09:46:35.934877] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:10.881 [2024-12-06 09:46:35.934944] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:10.881 [2024-12-06 09:46:35.934982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:10.881 [2024-12-06 09:46:35.935129] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:10.881 [2024-12-06 09:46:35.935166] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.881 [2024-12-06 09:46:35.935445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:10.881 [2024-12-06 
09:46:35.935621] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:10.881 [2024-12-06 09:46:35.935640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:10.881 [2024-12-06 09:46:35.935814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.881 pt3 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.881 "name": "raid_bdev1", 00:09:10.881 "uuid": "11231a2d-20b8-45bb-b13a-11c6d1a0ba16", 00:09:10.881 "strip_size_kb": 64, 00:09:10.881 "state": "online", 00:09:10.881 "raid_level": "concat", 00:09:10.881 "superblock": true, 00:09:10.881 "num_base_bdevs": 3, 00:09:10.881 "num_base_bdevs_discovered": 3, 00:09:10.881 "num_base_bdevs_operational": 3, 00:09:10.881 "base_bdevs_list": [ 00:09:10.881 { 00:09:10.881 "name": "pt1", 00:09:10.881 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.881 "is_configured": true, 00:09:10.881 "data_offset": 2048, 00:09:10.881 "data_size": 63488 00:09:10.881 }, 00:09:10.881 { 00:09:10.881 "name": "pt2", 00:09:10.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.881 "is_configured": true, 00:09:10.881 "data_offset": 2048, 00:09:10.881 "data_size": 63488 00:09:10.881 }, 00:09:10.881 { 00:09:10.881 "name": "pt3", 00:09:10.881 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.881 "is_configured": true, 00:09:10.881 "data_offset": 2048, 00:09:10.881 "data_size": 63488 00:09:10.881 } 00:09:10.881 ] 00:09:10.881 }' 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.881 09:46:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.450 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:11.450 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:11.450 09:46:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.450 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.450 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.450 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.450 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.450 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.450 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.450 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.450 [2024-12-06 09:46:36.425840] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.450 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.450 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.450 "name": "raid_bdev1", 00:09:11.450 "aliases": [ 00:09:11.450 "11231a2d-20b8-45bb-b13a-11c6d1a0ba16" 00:09:11.450 ], 00:09:11.450 "product_name": "Raid Volume", 00:09:11.450 "block_size": 512, 00:09:11.450 "num_blocks": 190464, 00:09:11.450 "uuid": "11231a2d-20b8-45bb-b13a-11c6d1a0ba16", 00:09:11.450 "assigned_rate_limits": { 00:09:11.450 "rw_ios_per_sec": 0, 00:09:11.450 "rw_mbytes_per_sec": 0, 00:09:11.450 "r_mbytes_per_sec": 0, 00:09:11.450 "w_mbytes_per_sec": 0 00:09:11.450 }, 00:09:11.450 "claimed": false, 00:09:11.450 "zoned": false, 00:09:11.450 "supported_io_types": { 00:09:11.450 "read": true, 00:09:11.450 "write": true, 00:09:11.450 "unmap": true, 00:09:11.450 "flush": true, 00:09:11.450 "reset": true, 00:09:11.450 "nvme_admin": false, 00:09:11.450 "nvme_io": false, 00:09:11.450 "nvme_io_md": false, 00:09:11.450 
"write_zeroes": true, 00:09:11.450 "zcopy": false, 00:09:11.450 "get_zone_info": false, 00:09:11.450 "zone_management": false, 00:09:11.450 "zone_append": false, 00:09:11.450 "compare": false, 00:09:11.450 "compare_and_write": false, 00:09:11.450 "abort": false, 00:09:11.450 "seek_hole": false, 00:09:11.450 "seek_data": false, 00:09:11.450 "copy": false, 00:09:11.450 "nvme_iov_md": false 00:09:11.450 }, 00:09:11.450 "memory_domains": [ 00:09:11.450 { 00:09:11.450 "dma_device_id": "system", 00:09:11.450 "dma_device_type": 1 00:09:11.450 }, 00:09:11.450 { 00:09:11.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.450 "dma_device_type": 2 00:09:11.450 }, 00:09:11.450 { 00:09:11.450 "dma_device_id": "system", 00:09:11.450 "dma_device_type": 1 00:09:11.450 }, 00:09:11.450 { 00:09:11.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.450 "dma_device_type": 2 00:09:11.450 }, 00:09:11.450 { 00:09:11.450 "dma_device_id": "system", 00:09:11.450 "dma_device_type": 1 00:09:11.450 }, 00:09:11.450 { 00:09:11.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.450 "dma_device_type": 2 00:09:11.450 } 00:09:11.450 ], 00:09:11.450 "driver_specific": { 00:09:11.450 "raid": { 00:09:11.450 "uuid": "11231a2d-20b8-45bb-b13a-11c6d1a0ba16", 00:09:11.450 "strip_size_kb": 64, 00:09:11.450 "state": "online", 00:09:11.450 "raid_level": "concat", 00:09:11.450 "superblock": true, 00:09:11.450 "num_base_bdevs": 3, 00:09:11.450 "num_base_bdevs_discovered": 3, 00:09:11.450 "num_base_bdevs_operational": 3, 00:09:11.450 "base_bdevs_list": [ 00:09:11.450 { 00:09:11.450 "name": "pt1", 00:09:11.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.450 "is_configured": true, 00:09:11.450 "data_offset": 2048, 00:09:11.450 "data_size": 63488 00:09:11.450 }, 00:09:11.450 { 00:09:11.451 "name": "pt2", 00:09:11.451 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.451 "is_configured": true, 00:09:11.451 "data_offset": 2048, 00:09:11.451 "data_size": 63488 00:09:11.451 }, 00:09:11.451 
{ 00:09:11.451 "name": "pt3", 00:09:11.451 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.451 "is_configured": true, 00:09:11.451 "data_offset": 2048, 00:09:11.451 "data_size": 63488 00:09:11.451 } 00:09:11.451 ] 00:09:11.451 } 00:09:11.451 } 00:09:11.451 }' 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:11.451 pt2 00:09:11.451 pt3' 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:11.451 09:46:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:11.451 
[2024-12-06 09:46:36.685499] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.451 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 11231a2d-20b8-45bb-b13a-11c6d1a0ba16 '!=' 11231a2d-20b8-45bb-b13a-11c6d1a0ba16 ']' 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66810 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66810 ']' 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66810 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66810 00:09:11.728 killing process with pid 66810 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66810' 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66810 00:09:11.728 [2024-12-06 09:46:36.767551] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.728 [2024-12-06 09:46:36.767645] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.728 [2024-12-06 09:46:36.767719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.728 [2024-12-06 09:46:36.767732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:11.728 09:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66810 00:09:11.994 [2024-12-06 09:46:37.078059] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.933 09:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:12.933 00:09:12.933 real 0m5.206s 00:09:12.933 user 0m7.494s 00:09:12.933 sys 0m0.847s 00:09:12.933 09:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.933 09:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.933 ************************************ 00:09:12.933 END TEST raid_superblock_test 00:09:12.933 ************************************ 00:09:13.192 09:46:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:13.192 09:46:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:13.192 09:46:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.192 09:46:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.193 ************************************ 00:09:13.193 START TEST raid_read_error_test 00:09:13.193 ************************************ 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:13.193 09:46:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GVZShaaXoK 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67063 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67063 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67063 ']' 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.193 09:46:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.193 [2024-12-06 09:46:38.374596] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:09:13.193 [2024-12-06 09:46:38.374730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67063 ] 00:09:13.451 [2024-12-06 09:46:38.535305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.451 [2024-12-06 09:46:38.666769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.710 [2024-12-06 09:46:38.897810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.710 [2024-12-06 09:46:38.897847] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.278 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.278 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:14.278 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.278 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:14.278 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.278 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.278 BaseBdev1_malloc 00:09:14.278 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.278 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:14.278 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.278 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.278 true 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
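The trace above shows the per-base-bdev setup pattern used by raid_io_error_test: for each BaseBdevN it creates a malloc bdev, wraps it in an error-injection bdev (`bdev_error_create`, which exposes it as `EE_<name>`), and then claims that through a passthru bdev. The sketch below reproduces that stacking order only; the `rpc` function here is a hypothetical stand-in that prints the rpc.py call instead of talking to the SPDK socket, so it runs without a live target (in the real test, `rpc_cmd` sends these to /var/tmp/spdk.sock).

```shell
#!/usr/bin/env bash
# Stand-in for rpc_cmd: print the rpc.py invocation rather than issuing it.
rpc() { echo "rpc.py $*"; }

# Build the malloc -> error -> passthru stack for one base bdev,
# mirroring the bdev_raid.sh@815..817 steps in the trace above.
make_error_base_bdev() {
  local name=$1                                       # e.g. BaseBdev1
  rpc bdev_malloc_create 32 512 -b "${name}_malloc"   # 32 MiB backing store, 512 B blocks
  rpc bdev_error_create "${name}_malloc"              # error injector, exposed as EE_<name>_malloc
  rpc bdev_passthru_create -b "EE_${name}_malloc" -p "$name"
}

for i in 1 2 3; do
  make_error_base_bdev "BaseBdev$i"
done
```

The passthru layer on top of the error bdev is what lets the test later flip error injection on and off underneath a named, stable base bdev while the raid bdev holds its claim.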
00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.279 [2024-12-06 09:46:39.356661] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:14.279 [2024-12-06 09:46:39.356735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.279 [2024-12-06 09:46:39.356762] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:14.279 [2024-12-06 09:46:39.356775] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.279 [2024-12-06 09:46:39.359336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.279 [2024-12-06 09:46:39.359384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:14.279 BaseBdev1 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.279 BaseBdev2_malloc 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.279 true 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.279 [2024-12-06 09:46:39.429599] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:14.279 [2024-12-06 09:46:39.429674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.279 [2024-12-06 09:46:39.429697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:14.279 [2024-12-06 09:46:39.429710] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.279 [2024-12-06 09:46:39.432297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.279 [2024-12-06 09:46:39.432346] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:14.279 BaseBdev2 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.279 BaseBdev3_malloc 00:09:14.279 09:46:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.279 true 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.279 [2024-12-06 09:46:39.513535] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:14.279 [2024-12-06 09:46:39.513602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.279 [2024-12-06 09:46:39.513625] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:14.279 [2024-12-06 09:46:39.513637] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.279 [2024-12-06 09:46:39.516169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.279 [2024-12-06 09:46:39.516216] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:14.279 BaseBdev3 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.279 [2024-12-06 09:46:39.525616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.279 [2024-12-06 09:46:39.527801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.279 [2024-12-06 09:46:39.527900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.279 [2024-12-06 09:46:39.528159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:14.279 [2024-12-06 09:46:39.528184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:14.279 [2024-12-06 09:46:39.528501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:14.279 [2024-12-06 09:46:39.528700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:14.279 [2024-12-06 09:46:39.528724] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:14.279 [2024-12-06 09:46:39.528948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.279 09:46:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.279 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.538 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.538 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.538 "name": "raid_bdev1", 00:09:14.538 "uuid": "2e0768f3-2705-4dad-b9c6-00e1f1597e7e", 00:09:14.538 "strip_size_kb": 64, 00:09:14.538 "state": "online", 00:09:14.538 "raid_level": "concat", 00:09:14.538 "superblock": true, 00:09:14.538 "num_base_bdevs": 3, 00:09:14.538 "num_base_bdevs_discovered": 3, 00:09:14.538 "num_base_bdevs_operational": 3, 00:09:14.538 "base_bdevs_list": [ 00:09:14.538 { 00:09:14.538 "name": "BaseBdev1", 00:09:14.538 "uuid": "000b44b5-ae39-5cd3-b00c-7d8cb0b0efd5", 00:09:14.538 "is_configured": true, 00:09:14.538 "data_offset": 2048, 00:09:14.538 "data_size": 63488 00:09:14.538 }, 00:09:14.538 { 00:09:14.538 "name": "BaseBdev2", 00:09:14.538 "uuid": "1abe7160-80ad-58a9-b063-c6ad923309bd", 00:09:14.538 "is_configured": true, 00:09:14.538 "data_offset": 2048, 00:09:14.538 "data_size": 63488 
00:09:14.538 }, 00:09:14.538 { 00:09:14.538 "name": "BaseBdev3", 00:09:14.538 "uuid": "7d772897-adbf-5195-b7e9-731eb7fb9131", 00:09:14.538 "is_configured": true, 00:09:14.538 "data_offset": 2048, 00:09:14.538 "data_size": 63488 00:09:14.538 } 00:09:14.538 ] 00:09:14.538 }' 00:09:14.538 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.538 09:46:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.797 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:14.797 09:46:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:14.797 [2024-12-06 09:46:40.062254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.736 09:46:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.995 09:46:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.995 "name": "raid_bdev1", 00:09:15.995 "uuid": "2e0768f3-2705-4dad-b9c6-00e1f1597e7e", 00:09:15.995 "strip_size_kb": 64, 00:09:15.995 "state": "online", 00:09:15.995 "raid_level": "concat", 00:09:15.995 "superblock": true, 00:09:15.995 "num_base_bdevs": 3, 00:09:15.995 "num_base_bdevs_discovered": 3, 00:09:15.995 "num_base_bdevs_operational": 3, 00:09:15.995 "base_bdevs_list": [ 00:09:15.995 { 00:09:15.995 "name": "BaseBdev1", 00:09:15.995 "uuid": "000b44b5-ae39-5cd3-b00c-7d8cb0b0efd5", 00:09:15.995 "is_configured": true, 00:09:15.995 "data_offset": 2048, 00:09:15.995 "data_size": 63488 
00:09:15.995 }, 00:09:15.995 { 00:09:15.995 "name": "BaseBdev2", 00:09:15.995 "uuid": "1abe7160-80ad-58a9-b063-c6ad923309bd", 00:09:15.995 "is_configured": true, 00:09:15.995 "data_offset": 2048, 00:09:15.995 "data_size": 63488 00:09:15.995 }, 00:09:15.995 { 00:09:15.995 "name": "BaseBdev3", 00:09:15.995 "uuid": "7d772897-adbf-5195-b7e9-731eb7fb9131", 00:09:15.995 "is_configured": true, 00:09:15.995 "data_offset": 2048, 00:09:15.995 "data_size": 63488 00:09:15.995 } 00:09:15.995 ] 00:09:15.995 }' 00:09:15.995 09:46:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.995 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.255 [2024-12-06 09:46:41.401075] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:16.255 [2024-12-06 09:46:41.401114] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.255 [2024-12-06 09:46:41.404123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.255 [2024-12-06 09:46:41.404197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.255 [2024-12-06 09:46:41.404238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.255 [2024-12-06 09:46:41.404250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:16.255 { 00:09:16.255 "results": [ 00:09:16.255 { 00:09:16.255 "job": "raid_bdev1", 00:09:16.255 "core_mask": "0x1", 00:09:16.255 "workload": "randrw", 00:09:16.255 "percentage": 50, 
00:09:16.255 "status": "finished", 00:09:16.255 "queue_depth": 1, 00:09:16.255 "io_size": 131072, 00:09:16.255 "runtime": 1.339458, 00:09:16.255 "iops": 15229.294236922695, 00:09:16.255 "mibps": 1903.6617796153369, 00:09:16.255 "io_failed": 1, 00:09:16.255 "io_timeout": 0, 00:09:16.255 "avg_latency_us": 91.07384947341382, 00:09:16.255 "min_latency_us": 26.606113537117903, 00:09:16.255 "max_latency_us": 1652.709170305677 00:09:16.255 } 00:09:16.255 ], 00:09:16.255 "core_count": 1 00:09:16.255 } 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67063 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67063 ']' 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67063 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67063 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.255 killing process with pid 67063 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67063' 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67063 00:09:16.255 [2024-12-06 09:46:41.446055] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.255 09:46:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67063 00:09:16.514 [2024-12-06 
09:46:41.678343] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.894 09:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GVZShaaXoK 00:09:17.894 09:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:17.894 09:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:17.894 09:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:17.894 09:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:17.894 09:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.894 09:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.894 09:46:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:17.894 00:09:17.894 real 0m4.612s 00:09:17.894 user 0m5.504s 00:09:17.894 sys 0m0.583s 00:09:17.894 09:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.894 09:46:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 ************************************ 00:09:17.894 END TEST raid_read_error_test 00:09:17.894 ************************************ 00:09:17.894 09:46:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:17.894 09:46:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:17.894 09:46:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.894 09:46:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 ************************************ 00:09:17.894 START TEST raid_write_error_test 00:09:17.894 ************************************ 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:17.894 09:46:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:17.894 09:46:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.534yYaRjJl 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67203 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67203 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67203 ']' 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.894 09:46:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.894 [2024-12-06 09:46:43.050715] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:09:17.894 [2024-12-06 09:46:43.050840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67203 ] 00:09:18.154 [2024-12-06 09:46:43.227086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.154 [2024-12-06 09:46:43.343954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.414 [2024-12-06 09:46:43.550103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.414 [2024-12-06 09:46:43.550184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.674 09:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.674 09:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:18.674 09:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.674 09:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:18.674 09:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.674 09:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.935 BaseBdev1_malloc 00:09:18.935 09:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.935 09:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:18.935 09:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.935 09:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.935 true 00:09:18.935 09:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.935 09:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:18.935 09:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.935 09:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.935 [2024-12-06 09:46:43.959578] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:18.935 [2024-12-06 09:46:43.959637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.935 [2024-12-06 09:46:43.959659] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:18.935 [2024-12-06 09:46:43.959671] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.935 [2024-12-06 09:46:43.961970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.935 [2024-12-06 09:46:43.962012] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:18.935 BaseBdev1 00:09:18.935 09:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.935 09:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.935 09:46:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:18.935 09:46:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.935 09:46:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.935 BaseBdev2_malloc 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.935 true 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.935 [2024-12-06 09:46:44.025093] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:18.935 [2024-12-06 09:46:44.025180] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.935 [2024-12-06 09:46:44.025211] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:18.935 [2024-12-06 09:46:44.025231] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.935 [2024-12-06 09:46:44.028258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.935 [2024-12-06 09:46:44.028313] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:18.935 BaseBdev2 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.935 09:46:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.935 BaseBdev3_malloc 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.935 true 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.935 [2024-12-06 09:46:44.102992] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:18.935 [2024-12-06 09:46:44.103051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.935 [2024-12-06 09:46:44.103088] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:18.935 [2024-12-06 09:46:44.103104] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.935 [2024-12-06 09:46:44.105441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.935 [2024-12-06 09:46:44.105484] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:18.935 BaseBdev3 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.935 [2024-12-06 09:46:44.111067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.935 [2024-12-06 09:46:44.113069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.935 [2024-12-06 09:46:44.113170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.935 [2024-12-06 09:46:44.113408] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:18.935 [2024-12-06 09:46:44.113430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:18.935 [2024-12-06 09:46:44.113713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:18.935 [2024-12-06 09:46:44.113904] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:18.935 [2024-12-06 09:46:44.113924] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:18.935 [2024-12-06 09:46:44.114090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.935 "name": "raid_bdev1", 00:09:18.935 "uuid": "af9a379b-4c5c-4783-a3b5-a07602b79a0c", 00:09:18.935 "strip_size_kb": 64, 00:09:18.935 "state": "online", 00:09:18.935 "raid_level": "concat", 00:09:18.935 "superblock": true, 00:09:18.935 "num_base_bdevs": 3, 00:09:18.935 "num_base_bdevs_discovered": 3, 00:09:18.935 "num_base_bdevs_operational": 3, 00:09:18.935 "base_bdevs_list": [ 00:09:18.935 { 00:09:18.935 
"name": "BaseBdev1", 00:09:18.935 "uuid": "3f56fa41-c7cc-50aa-8280-6b8936a00f24", 00:09:18.935 "is_configured": true, 00:09:18.935 "data_offset": 2048, 00:09:18.935 "data_size": 63488 00:09:18.935 }, 00:09:18.935 { 00:09:18.935 "name": "BaseBdev2", 00:09:18.935 "uuid": "323acac7-3240-5de0-8ecd-442cb1dbcf32", 00:09:18.935 "is_configured": true, 00:09:18.935 "data_offset": 2048, 00:09:18.935 "data_size": 63488 00:09:18.935 }, 00:09:18.935 { 00:09:18.935 "name": "BaseBdev3", 00:09:18.935 "uuid": "84f38c87-cd68-53d7-9c4e-52fe5a10ebb4", 00:09:18.935 "is_configured": true, 00:09:18.935 "data_offset": 2048, 00:09:18.935 "data_size": 63488 00:09:18.935 } 00:09:18.935 ] 00:09:18.935 }' 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.935 09:46:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.194 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:19.194 09:46:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:19.454 [2024-12-06 09:46:44.540343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.391 "name": "raid_bdev1", 00:09:20.391 "uuid": "af9a379b-4c5c-4783-a3b5-a07602b79a0c", 00:09:20.391 "strip_size_kb": 64, 00:09:20.391 "state": "online", 
00:09:20.391 "raid_level": "concat", 00:09:20.391 "superblock": true, 00:09:20.391 "num_base_bdevs": 3, 00:09:20.391 "num_base_bdevs_discovered": 3, 00:09:20.391 "num_base_bdevs_operational": 3, 00:09:20.391 "base_bdevs_list": [ 00:09:20.391 { 00:09:20.391 "name": "BaseBdev1", 00:09:20.391 "uuid": "3f56fa41-c7cc-50aa-8280-6b8936a00f24", 00:09:20.391 "is_configured": true, 00:09:20.391 "data_offset": 2048, 00:09:20.391 "data_size": 63488 00:09:20.391 }, 00:09:20.391 { 00:09:20.391 "name": "BaseBdev2", 00:09:20.391 "uuid": "323acac7-3240-5de0-8ecd-442cb1dbcf32", 00:09:20.391 "is_configured": true, 00:09:20.391 "data_offset": 2048, 00:09:20.391 "data_size": 63488 00:09:20.391 }, 00:09:20.391 { 00:09:20.391 "name": "BaseBdev3", 00:09:20.391 "uuid": "84f38c87-cd68-53d7-9c4e-52fe5a10ebb4", 00:09:20.391 "is_configured": true, 00:09:20.391 "data_offset": 2048, 00:09:20.391 "data_size": 63488 00:09:20.391 } 00:09:20.391 ] 00:09:20.391 }' 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.391 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.651 [2024-12-06 09:46:45.855894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.651 [2024-12-06 09:46:45.855936] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.651 [2024-12-06 09:46:45.859893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.651 [2024-12-06 09:46:45.860016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.651 [2024-12-06 09:46:45.860072] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.651 [2024-12-06 09:46:45.860095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:20.651 { 00:09:20.651 "results": [ 00:09:20.651 { 00:09:20.651 "job": "raid_bdev1", 00:09:20.651 "core_mask": "0x1", 00:09:20.651 "workload": "randrw", 00:09:20.651 "percentage": 50, 00:09:20.651 "status": "finished", 00:09:20.651 "queue_depth": 1, 00:09:20.651 "io_size": 131072, 00:09:20.651 "runtime": 1.315727, 00:09:20.651 "iops": 10733.989649828574, 00:09:20.651 "mibps": 1341.7487062285718, 00:09:20.651 "io_failed": 1, 00:09:20.651 "io_timeout": 0, 00:09:20.651 "avg_latency_us": 128.00675143056077, 00:09:20.651 "min_latency_us": 35.99650655021834, 00:09:20.651 "max_latency_us": 1874.5013100436681 00:09:20.651 } 00:09:20.651 ], 00:09:20.651 "core_count": 1 00:09:20.651 } 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67203 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67203 ']' 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67203 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67203 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67203' 00:09:20.651 killing process with pid 67203 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67203 00:09:20.651 [2024-12-06 09:46:45.889685] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.651 09:46:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67203 00:09:20.911 [2024-12-06 09:46:46.176973] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.292 09:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.534yYaRjJl 00:09:22.292 09:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:22.292 09:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:22.292 09:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:22.292 09:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:22.292 09:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.292 09:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:22.292 09:46:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:22.292 00:09:22.292 real 0m4.456s 00:09:22.292 user 0m5.115s 00:09:22.292 sys 0m0.519s 00:09:22.292 09:46:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.292 09:46:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.292 ************************************ 00:09:22.292 END TEST raid_write_error_test 00:09:22.292 ************************************ 00:09:22.292 09:46:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:22.292 09:46:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:22.292 09:46:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:22.292 09:46:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.292 09:46:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.292 ************************************ 00:09:22.292 START TEST raid_state_function_test 00:09:22.292 ************************************ 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67347 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67347' 00:09:22.292 Process raid pid: 67347 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67347 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67347 ']' 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.292 09:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.293 09:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.293 09:46:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.293 [2024-12-06 09:46:47.533093] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:09:22.293 [2024-12-06 09:46:47.533252] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.553 [2024-12-06 09:46:47.713125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.812 [2024-12-06 09:46:47.828768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.812 [2024-12-06 09:46:48.029376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.812 [2024-12-06 09:46:48.029418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.380 [2024-12-06 09:46:48.415971] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.380 [2024-12-06 09:46:48.416036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.380 [2024-12-06 09:46:48.416047] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.380 [2024-12-06 09:46:48.416074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.380 [2024-12-06 09:46:48.416081] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.380 [2024-12-06 09:46:48.416091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.380 
09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.380 "name": "Existed_Raid", 00:09:23.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.380 "strip_size_kb": 0, 00:09:23.380 "state": "configuring", 00:09:23.380 "raid_level": "raid1", 00:09:23.380 "superblock": false, 00:09:23.380 "num_base_bdevs": 3, 00:09:23.380 "num_base_bdevs_discovered": 0, 00:09:23.380 "num_base_bdevs_operational": 3, 00:09:23.380 "base_bdevs_list": [ 00:09:23.380 { 00:09:23.380 "name": "BaseBdev1", 00:09:23.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.380 "is_configured": false, 00:09:23.380 "data_offset": 0, 00:09:23.380 "data_size": 0 00:09:23.380 }, 00:09:23.380 { 00:09:23.380 "name": "BaseBdev2", 00:09:23.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.380 "is_configured": false, 00:09:23.380 "data_offset": 0, 00:09:23.380 "data_size": 0 00:09:23.380 }, 00:09:23.380 { 00:09:23.380 "name": "BaseBdev3", 00:09:23.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.380 "is_configured": false, 00:09:23.380 "data_offset": 0, 00:09:23.380 "data_size": 0 00:09:23.380 } 00:09:23.380 ] 00:09:23.380 }' 00:09:23.380 09:46:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.380 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.638 [2024-12-06 09:46:48.823299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.638 [2024-12-06 09:46:48.823336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.638 [2024-12-06 09:46:48.835248] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.638 [2024-12-06 09:46:48.835294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.638 [2024-12-06 09:46:48.835303] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.638 [2024-12-06 09:46:48.835312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.638 [2024-12-06 09:46:48.835334] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.638 [2024-12-06 09:46:48.835343] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.638 [2024-12-06 09:46:48.884854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.638 BaseBdev1 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.638 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.638 [ 00:09:23.638 { 00:09:23.638 "name": "BaseBdev1", 00:09:23.638 "aliases": [ 00:09:23.897 "64559772-f84d-4973-8727-98e597cab119" 00:09:23.897 ], 00:09:23.897 "product_name": "Malloc disk", 00:09:23.897 "block_size": 512, 00:09:23.897 "num_blocks": 65536, 00:09:23.897 "uuid": "64559772-f84d-4973-8727-98e597cab119", 00:09:23.897 "assigned_rate_limits": { 00:09:23.897 "rw_ios_per_sec": 0, 00:09:23.897 "rw_mbytes_per_sec": 0, 00:09:23.897 "r_mbytes_per_sec": 0, 00:09:23.897 "w_mbytes_per_sec": 0 00:09:23.897 }, 00:09:23.897 "claimed": true, 00:09:23.897 "claim_type": "exclusive_write", 00:09:23.897 "zoned": false, 00:09:23.897 "supported_io_types": { 00:09:23.897 "read": true, 00:09:23.897 "write": true, 00:09:23.897 "unmap": true, 00:09:23.897 "flush": true, 00:09:23.897 "reset": true, 00:09:23.897 "nvme_admin": false, 00:09:23.897 "nvme_io": false, 00:09:23.897 "nvme_io_md": false, 00:09:23.897 "write_zeroes": true, 00:09:23.897 "zcopy": true, 00:09:23.897 "get_zone_info": false, 00:09:23.897 "zone_management": false, 00:09:23.897 "zone_append": false, 00:09:23.897 "compare": false, 00:09:23.897 "compare_and_write": false, 00:09:23.897 "abort": true, 00:09:23.897 "seek_hole": false, 00:09:23.897 "seek_data": false, 00:09:23.897 "copy": true, 00:09:23.897 "nvme_iov_md": false 00:09:23.897 }, 00:09:23.897 "memory_domains": [ 00:09:23.897 { 00:09:23.897 "dma_device_id": "system", 00:09:23.897 "dma_device_type": 1 00:09:23.897 }, 00:09:23.897 { 00:09:23.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.897 "dma_device_type": 2 00:09:23.897 } 00:09:23.897 ], 00:09:23.897 "driver_specific": {} 00:09:23.897 } 00:09:23.897 ] 00:09:23.897 09:46:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:23.897 "name": "Existed_Raid", 00:09:23.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.897 "strip_size_kb": 0, 00:09:23.897 "state": "configuring", 00:09:23.897 "raid_level": "raid1", 00:09:23.897 "superblock": false, 00:09:23.897 "num_base_bdevs": 3, 00:09:23.897 "num_base_bdevs_discovered": 1, 00:09:23.897 "num_base_bdevs_operational": 3, 00:09:23.897 "base_bdevs_list": [ 00:09:23.897 { 00:09:23.897 "name": "BaseBdev1", 00:09:23.897 "uuid": "64559772-f84d-4973-8727-98e597cab119", 00:09:23.897 "is_configured": true, 00:09:23.897 "data_offset": 0, 00:09:23.897 "data_size": 65536 00:09:23.897 }, 00:09:23.897 { 00:09:23.897 "name": "BaseBdev2", 00:09:23.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.897 "is_configured": false, 00:09:23.897 "data_offset": 0, 00:09:23.897 "data_size": 0 00:09:23.897 }, 00:09:23.897 { 00:09:23.897 "name": "BaseBdev3", 00:09:23.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.897 "is_configured": false, 00:09:23.897 "data_offset": 0, 00:09:23.897 "data_size": 0 00:09:23.897 } 00:09:23.897 ] 00:09:23.897 }' 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.897 09:46:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.157 [2024-12-06 09:46:49.344129] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.157 [2024-12-06 09:46:49.344202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.157 [2024-12-06 09:46:49.356136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.157 [2024-12-06 09:46:49.358140] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.157 [2024-12-06 09:46:49.358192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.157 [2024-12-06 09:46:49.358203] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:24.157 [2024-12-06 09:46:49.358212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.157 "name": "Existed_Raid", 00:09:24.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.157 "strip_size_kb": 0, 00:09:24.157 "state": "configuring", 00:09:24.157 "raid_level": "raid1", 00:09:24.157 "superblock": false, 00:09:24.157 "num_base_bdevs": 3, 00:09:24.157 "num_base_bdevs_discovered": 1, 00:09:24.157 "num_base_bdevs_operational": 3, 00:09:24.157 "base_bdevs_list": [ 00:09:24.157 { 00:09:24.157 "name": "BaseBdev1", 00:09:24.157 "uuid": "64559772-f84d-4973-8727-98e597cab119", 00:09:24.157 "is_configured": true, 00:09:24.157 "data_offset": 0, 00:09:24.157 "data_size": 65536 00:09:24.157 }, 00:09:24.157 { 00:09:24.157 "name": "BaseBdev2", 00:09:24.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.157 
"is_configured": false, 00:09:24.157 "data_offset": 0, 00:09:24.157 "data_size": 0 00:09:24.157 }, 00:09:24.157 { 00:09:24.157 "name": "BaseBdev3", 00:09:24.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.157 "is_configured": false, 00:09:24.157 "data_offset": 0, 00:09:24.157 "data_size": 0 00:09:24.157 } 00:09:24.157 ] 00:09:24.157 }' 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.157 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.723 [2024-12-06 09:46:49.836485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.723 BaseBdev2 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.723 09:46:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.723 [ 00:09:24.723 { 00:09:24.723 "name": "BaseBdev2", 00:09:24.723 "aliases": [ 00:09:24.723 "d5e4e105-b18c-4eac-81f3-998b1a531b5e" 00:09:24.723 ], 00:09:24.723 "product_name": "Malloc disk", 00:09:24.723 "block_size": 512, 00:09:24.723 "num_blocks": 65536, 00:09:24.723 "uuid": "d5e4e105-b18c-4eac-81f3-998b1a531b5e", 00:09:24.723 "assigned_rate_limits": { 00:09:24.723 "rw_ios_per_sec": 0, 00:09:24.723 "rw_mbytes_per_sec": 0, 00:09:24.723 "r_mbytes_per_sec": 0, 00:09:24.723 "w_mbytes_per_sec": 0 00:09:24.723 }, 00:09:24.723 "claimed": true, 00:09:24.723 "claim_type": "exclusive_write", 00:09:24.723 "zoned": false, 00:09:24.723 "supported_io_types": { 00:09:24.723 "read": true, 00:09:24.723 "write": true, 00:09:24.723 "unmap": true, 00:09:24.723 "flush": true, 00:09:24.723 "reset": true, 00:09:24.723 "nvme_admin": false, 00:09:24.723 "nvme_io": false, 00:09:24.723 "nvme_io_md": false, 00:09:24.723 "write_zeroes": true, 00:09:24.723 "zcopy": true, 00:09:24.723 "get_zone_info": false, 00:09:24.723 "zone_management": false, 00:09:24.723 "zone_append": false, 00:09:24.723 "compare": false, 00:09:24.723 "compare_and_write": false, 00:09:24.723 "abort": true, 00:09:24.723 "seek_hole": false, 00:09:24.723 "seek_data": false, 00:09:24.723 "copy": true, 00:09:24.723 "nvme_iov_md": false 00:09:24.723 }, 00:09:24.723 
"memory_domains": [ 00:09:24.723 { 00:09:24.723 "dma_device_id": "system", 00:09:24.723 "dma_device_type": 1 00:09:24.723 }, 00:09:24.723 { 00:09:24.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.723 "dma_device_type": 2 00:09:24.723 } 00:09:24.723 ], 00:09:24.723 "driver_specific": {} 00:09:24.723 } 00:09:24.723 ] 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.723 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.723 "name": "Existed_Raid", 00:09:24.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.723 "strip_size_kb": 0, 00:09:24.723 "state": "configuring", 00:09:24.723 "raid_level": "raid1", 00:09:24.723 "superblock": false, 00:09:24.723 "num_base_bdevs": 3, 00:09:24.723 "num_base_bdevs_discovered": 2, 00:09:24.723 "num_base_bdevs_operational": 3, 00:09:24.723 "base_bdevs_list": [ 00:09:24.723 { 00:09:24.723 "name": "BaseBdev1", 00:09:24.723 "uuid": "64559772-f84d-4973-8727-98e597cab119", 00:09:24.723 "is_configured": true, 00:09:24.723 "data_offset": 0, 00:09:24.723 "data_size": 65536 00:09:24.723 }, 00:09:24.723 { 00:09:24.723 "name": "BaseBdev2", 00:09:24.723 "uuid": "d5e4e105-b18c-4eac-81f3-998b1a531b5e", 00:09:24.723 "is_configured": true, 00:09:24.723 "data_offset": 0, 00:09:24.723 "data_size": 65536 00:09:24.723 }, 00:09:24.723 { 00:09:24.723 "name": "BaseBdev3", 00:09:24.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.724 "is_configured": false, 00:09:24.724 "data_offset": 0, 00:09:24.724 "data_size": 0 00:09:24.724 } 00:09:24.724 ] 00:09:24.724 }' 00:09:24.724 09:46:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.724 09:46:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.285 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:25.285 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.286 [2024-12-06 09:46:50.331894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.286 [2024-12-06 09:46:50.331950] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:25.286 [2024-12-06 09:46:50.331964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:25.286 [2024-12-06 09:46:50.332245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:25.286 [2024-12-06 09:46:50.332443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:25.286 [2024-12-06 09:46:50.332459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:25.286 [2024-12-06 09:46:50.332719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.286 BaseBdev3 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.286 [ 00:09:25.286 { 00:09:25.286 "name": "BaseBdev3", 00:09:25.286 "aliases": [ 00:09:25.286 "05430879-7db3-4d47-9621-cf8a93f9cc8a" 00:09:25.286 ], 00:09:25.286 "product_name": "Malloc disk", 00:09:25.286 "block_size": 512, 00:09:25.286 "num_blocks": 65536, 00:09:25.286 "uuid": "05430879-7db3-4d47-9621-cf8a93f9cc8a", 00:09:25.286 "assigned_rate_limits": { 00:09:25.286 "rw_ios_per_sec": 0, 00:09:25.286 "rw_mbytes_per_sec": 0, 00:09:25.286 "r_mbytes_per_sec": 0, 00:09:25.286 "w_mbytes_per_sec": 0 00:09:25.286 }, 00:09:25.286 "claimed": true, 00:09:25.286 "claim_type": "exclusive_write", 00:09:25.286 "zoned": false, 00:09:25.286 "supported_io_types": { 00:09:25.286 "read": true, 00:09:25.286 "write": true, 00:09:25.286 "unmap": true, 00:09:25.286 "flush": true, 00:09:25.286 "reset": true, 00:09:25.286 "nvme_admin": false, 00:09:25.286 "nvme_io": false, 00:09:25.286 "nvme_io_md": false, 00:09:25.286 "write_zeroes": true, 00:09:25.286 "zcopy": true, 00:09:25.286 "get_zone_info": false, 00:09:25.286 "zone_management": false, 00:09:25.286 "zone_append": false, 00:09:25.286 "compare": false, 00:09:25.286 "compare_and_write": false, 00:09:25.286 "abort": true, 00:09:25.286 "seek_hole": false, 00:09:25.286 "seek_data": false, 00:09:25.286 
"copy": true, 00:09:25.286 "nvme_iov_md": false 00:09:25.286 }, 00:09:25.286 "memory_domains": [ 00:09:25.286 { 00:09:25.286 "dma_device_id": "system", 00:09:25.286 "dma_device_type": 1 00:09:25.286 }, 00:09:25.286 { 00:09:25.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.286 "dma_device_type": 2 00:09:25.286 } 00:09:25.286 ], 00:09:25.286 "driver_specific": {} 00:09:25.286 } 00:09:25.286 ] 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.286 09:46:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.286 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.286 "name": "Existed_Raid", 00:09:25.286 "uuid": "044f6b0b-3b6c-4697-b08c-82fc94efeb40", 00:09:25.286 "strip_size_kb": 0, 00:09:25.286 "state": "online", 00:09:25.286 "raid_level": "raid1", 00:09:25.286 "superblock": false, 00:09:25.286 "num_base_bdevs": 3, 00:09:25.286 "num_base_bdevs_discovered": 3, 00:09:25.286 "num_base_bdevs_operational": 3, 00:09:25.286 "base_bdevs_list": [ 00:09:25.286 { 00:09:25.286 "name": "BaseBdev1", 00:09:25.286 "uuid": "64559772-f84d-4973-8727-98e597cab119", 00:09:25.286 "is_configured": true, 00:09:25.286 "data_offset": 0, 00:09:25.286 "data_size": 65536 00:09:25.286 }, 00:09:25.286 { 00:09:25.286 "name": "BaseBdev2", 00:09:25.286 "uuid": "d5e4e105-b18c-4eac-81f3-998b1a531b5e", 00:09:25.286 "is_configured": true, 00:09:25.287 "data_offset": 0, 00:09:25.287 "data_size": 65536 00:09:25.287 }, 00:09:25.287 { 00:09:25.287 "name": "BaseBdev3", 00:09:25.287 "uuid": "05430879-7db3-4d47-9621-cf8a93f9cc8a", 00:09:25.287 "is_configured": true, 00:09:25.287 "data_offset": 0, 00:09:25.287 "data_size": 65536 00:09:25.287 } 00:09:25.287 ] 00:09:25.287 }' 00:09:25.287 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.287 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.851 09:46:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.851 [2024-12-06 09:46:50.839461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.851 "name": "Existed_Raid", 00:09:25.851 "aliases": [ 00:09:25.851 "044f6b0b-3b6c-4697-b08c-82fc94efeb40" 00:09:25.851 ], 00:09:25.851 "product_name": "Raid Volume", 00:09:25.851 "block_size": 512, 00:09:25.851 "num_blocks": 65536, 00:09:25.851 "uuid": "044f6b0b-3b6c-4697-b08c-82fc94efeb40", 00:09:25.851 "assigned_rate_limits": { 00:09:25.851 "rw_ios_per_sec": 0, 00:09:25.851 "rw_mbytes_per_sec": 0, 00:09:25.851 "r_mbytes_per_sec": 0, 00:09:25.851 "w_mbytes_per_sec": 0 00:09:25.851 }, 00:09:25.851 "claimed": false, 00:09:25.851 "zoned": false, 
00:09:25.851 "supported_io_types": { 00:09:25.851 "read": true, 00:09:25.851 "write": true, 00:09:25.851 "unmap": false, 00:09:25.851 "flush": false, 00:09:25.851 "reset": true, 00:09:25.851 "nvme_admin": false, 00:09:25.851 "nvme_io": false, 00:09:25.851 "nvme_io_md": false, 00:09:25.851 "write_zeroes": true, 00:09:25.851 "zcopy": false, 00:09:25.851 "get_zone_info": false, 00:09:25.851 "zone_management": false, 00:09:25.851 "zone_append": false, 00:09:25.851 "compare": false, 00:09:25.851 "compare_and_write": false, 00:09:25.851 "abort": false, 00:09:25.851 "seek_hole": false, 00:09:25.851 "seek_data": false, 00:09:25.851 "copy": false, 00:09:25.851 "nvme_iov_md": false 00:09:25.851 }, 00:09:25.851 "memory_domains": [ 00:09:25.851 { 00:09:25.851 "dma_device_id": "system", 00:09:25.851 "dma_device_type": 1 00:09:25.851 }, 00:09:25.851 { 00:09:25.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.851 "dma_device_type": 2 00:09:25.851 }, 00:09:25.851 { 00:09:25.851 "dma_device_id": "system", 00:09:25.851 "dma_device_type": 1 00:09:25.851 }, 00:09:25.851 { 00:09:25.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.851 "dma_device_type": 2 00:09:25.851 }, 00:09:25.851 { 00:09:25.851 "dma_device_id": "system", 00:09:25.851 "dma_device_type": 1 00:09:25.851 }, 00:09:25.851 { 00:09:25.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.851 "dma_device_type": 2 00:09:25.851 } 00:09:25.851 ], 00:09:25.851 "driver_specific": { 00:09:25.851 "raid": { 00:09:25.851 "uuid": "044f6b0b-3b6c-4697-b08c-82fc94efeb40", 00:09:25.851 "strip_size_kb": 0, 00:09:25.851 "state": "online", 00:09:25.851 "raid_level": "raid1", 00:09:25.851 "superblock": false, 00:09:25.851 "num_base_bdevs": 3, 00:09:25.851 "num_base_bdevs_discovered": 3, 00:09:25.851 "num_base_bdevs_operational": 3, 00:09:25.851 "base_bdevs_list": [ 00:09:25.851 { 00:09:25.851 "name": "BaseBdev1", 00:09:25.851 "uuid": "64559772-f84d-4973-8727-98e597cab119", 00:09:25.851 "is_configured": true, 00:09:25.851 
"data_offset": 0, 00:09:25.851 "data_size": 65536 00:09:25.851 }, 00:09:25.851 { 00:09:25.851 "name": "BaseBdev2", 00:09:25.851 "uuid": "d5e4e105-b18c-4eac-81f3-998b1a531b5e", 00:09:25.851 "is_configured": true, 00:09:25.851 "data_offset": 0, 00:09:25.851 "data_size": 65536 00:09:25.851 }, 00:09:25.851 { 00:09:25.851 "name": "BaseBdev3", 00:09:25.851 "uuid": "05430879-7db3-4d47-9621-cf8a93f9cc8a", 00:09:25.851 "is_configured": true, 00:09:25.851 "data_offset": 0, 00:09:25.851 "data_size": 65536 00:09:25.851 } 00:09:25.851 ] 00:09:25.851 } 00:09:25.851 } 00:09:25.851 }' 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:25.851 BaseBdev2 00:09:25.851 BaseBdev3' 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.851 09:46:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.851 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.851 [2024-12-06 09:46:51.082758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.110 "name": "Existed_Raid", 00:09:26.110 "uuid": "044f6b0b-3b6c-4697-b08c-82fc94efeb40", 00:09:26.110 "strip_size_kb": 0, 00:09:26.110 "state": "online", 00:09:26.110 "raid_level": "raid1", 00:09:26.110 "superblock": false, 00:09:26.110 "num_base_bdevs": 3, 00:09:26.110 "num_base_bdevs_discovered": 2, 00:09:26.110 "num_base_bdevs_operational": 2, 00:09:26.110 "base_bdevs_list": [ 00:09:26.110 { 00:09:26.110 "name": null, 00:09:26.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.110 "is_configured": false, 00:09:26.110 "data_offset": 0, 00:09:26.110 "data_size": 65536 00:09:26.110 }, 00:09:26.110 { 00:09:26.110 "name": "BaseBdev2", 00:09:26.110 "uuid": "d5e4e105-b18c-4eac-81f3-998b1a531b5e", 00:09:26.110 "is_configured": true, 00:09:26.110 "data_offset": 0, 00:09:26.110 "data_size": 65536 00:09:26.110 }, 00:09:26.110 { 00:09:26.110 "name": "BaseBdev3", 00:09:26.110 "uuid": "05430879-7db3-4d47-9621-cf8a93f9cc8a", 00:09:26.110 "is_configured": true, 00:09:26.110 "data_offset": 0, 00:09:26.110 "data_size": 65536 00:09:26.110 } 00:09:26.110 ] 
00:09:26.110 }' 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.110 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.369 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:26.369 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.369 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.369 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.369 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:26.369 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.369 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.369 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:26.369 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:26.369 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:26.369 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.369 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.369 [2024-12-06 09:46:51.639404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:26.627 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.627 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:26.627 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.627 09:46:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:26.627 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.627 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.627 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.627 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.627 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:26.627 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:26.627 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:26.628 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.628 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.628 [2024-12-06 09:46:51.783901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:26.628 [2024-12-06 09:46:51.784003] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.628 [2024-12-06 09:46:51.882221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.628 [2024-12-06 09:46:51.882277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.628 [2024-12-06 09:46:51.882289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:26.628 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.628 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:26.628 09:46:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.628 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.628 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:26.628 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.628 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.628 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.887 BaseBdev2 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.887 
09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.887 09:46:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.887 [ 00:09:26.887 { 00:09:26.887 "name": "BaseBdev2", 00:09:26.887 "aliases": [ 00:09:26.887 "89842a09-b97a-4303-b516-d8e37224d163" 00:09:26.887 ], 00:09:26.887 "product_name": "Malloc disk", 00:09:26.887 "block_size": 512, 00:09:26.887 "num_blocks": 65536, 00:09:26.887 "uuid": "89842a09-b97a-4303-b516-d8e37224d163", 00:09:26.887 "assigned_rate_limits": { 00:09:26.887 "rw_ios_per_sec": 0, 00:09:26.887 "rw_mbytes_per_sec": 0, 00:09:26.887 "r_mbytes_per_sec": 0, 00:09:26.887 "w_mbytes_per_sec": 0 00:09:26.887 }, 00:09:26.887 "claimed": false, 00:09:26.887 "zoned": false, 00:09:26.887 "supported_io_types": { 00:09:26.887 "read": true, 00:09:26.887 "write": true, 00:09:26.887 "unmap": true, 00:09:26.887 "flush": true, 00:09:26.887 "reset": true, 00:09:26.887 "nvme_admin": false, 00:09:26.887 "nvme_io": false, 00:09:26.887 "nvme_io_md": false, 00:09:26.887 "write_zeroes": true, 
00:09:26.887 "zcopy": true, 00:09:26.887 "get_zone_info": false, 00:09:26.887 "zone_management": false, 00:09:26.887 "zone_append": false, 00:09:26.887 "compare": false, 00:09:26.887 "compare_and_write": false, 00:09:26.887 "abort": true, 00:09:26.887 "seek_hole": false, 00:09:26.887 "seek_data": false, 00:09:26.887 "copy": true, 00:09:26.887 "nvme_iov_md": false 00:09:26.887 }, 00:09:26.887 "memory_domains": [ 00:09:26.887 { 00:09:26.887 "dma_device_id": "system", 00:09:26.887 "dma_device_type": 1 00:09:26.887 }, 00:09:26.887 { 00:09:26.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.887 "dma_device_type": 2 00:09:26.887 } 00:09:26.887 ], 00:09:26.887 "driver_specific": {} 00:09:26.887 } 00:09:26.887 ] 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.887 BaseBdev3 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.887 09:46:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.887 [ 00:09:26.887 { 00:09:26.887 "name": "BaseBdev3", 00:09:26.887 "aliases": [ 00:09:26.887 "7965755c-3654-4ef0-a7c8-33ba4c315503" 00:09:26.887 ], 00:09:26.887 "product_name": "Malloc disk", 00:09:26.887 "block_size": 512, 00:09:26.887 "num_blocks": 65536, 00:09:26.887 "uuid": "7965755c-3654-4ef0-a7c8-33ba4c315503", 00:09:26.887 "assigned_rate_limits": { 00:09:26.887 "rw_ios_per_sec": 0, 00:09:26.887 "rw_mbytes_per_sec": 0, 00:09:26.887 "r_mbytes_per_sec": 0, 00:09:26.887 "w_mbytes_per_sec": 0 00:09:26.887 }, 00:09:26.887 "claimed": false, 00:09:26.887 "zoned": false, 00:09:26.887 "supported_io_types": { 00:09:26.887 "read": true, 00:09:26.887 "write": true, 00:09:26.887 "unmap": true, 00:09:26.887 "flush": true, 00:09:26.887 "reset": true, 00:09:26.887 "nvme_admin": false, 00:09:26.887 "nvme_io": false, 00:09:26.887 "nvme_io_md": false, 00:09:26.887 "write_zeroes": true, 
00:09:26.887 "zcopy": true, 00:09:26.887 "get_zone_info": false, 00:09:26.887 "zone_management": false, 00:09:26.887 "zone_append": false, 00:09:26.887 "compare": false, 00:09:26.887 "compare_and_write": false, 00:09:26.887 "abort": true, 00:09:26.887 "seek_hole": false, 00:09:26.887 "seek_data": false, 00:09:26.887 "copy": true, 00:09:26.887 "nvme_iov_md": false 00:09:26.887 }, 00:09:26.887 "memory_domains": [ 00:09:26.887 { 00:09:26.887 "dma_device_id": "system", 00:09:26.887 "dma_device_type": 1 00:09:26.887 }, 00:09:26.887 { 00:09:26.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.887 "dma_device_type": 2 00:09:26.887 } 00:09:26.887 ], 00:09:26.887 "driver_specific": {} 00:09:26.887 } 00:09:26.887 ] 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.887 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.887 [2024-12-06 09:46:52.092432] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:26.888 [2024-12-06 09:46:52.092492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:26.888 [2024-12-06 09:46:52.092513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.888 [2024-12-06 09:46:52.094316] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:26.888 "name": "Existed_Raid", 00:09:26.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.888 "strip_size_kb": 0, 00:09:26.888 "state": "configuring", 00:09:26.888 "raid_level": "raid1", 00:09:26.888 "superblock": false, 00:09:26.888 "num_base_bdevs": 3, 00:09:26.888 "num_base_bdevs_discovered": 2, 00:09:26.888 "num_base_bdevs_operational": 3, 00:09:26.888 "base_bdevs_list": [ 00:09:26.888 { 00:09:26.888 "name": "BaseBdev1", 00:09:26.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.888 "is_configured": false, 00:09:26.888 "data_offset": 0, 00:09:26.888 "data_size": 0 00:09:26.888 }, 00:09:26.888 { 00:09:26.888 "name": "BaseBdev2", 00:09:26.888 "uuid": "89842a09-b97a-4303-b516-d8e37224d163", 00:09:26.888 "is_configured": true, 00:09:26.888 "data_offset": 0, 00:09:26.888 "data_size": 65536 00:09:26.888 }, 00:09:26.888 { 00:09:26.888 "name": "BaseBdev3", 00:09:26.888 "uuid": "7965755c-3654-4ef0-a7c8-33ba4c315503", 00:09:26.888 "is_configured": true, 00:09:26.888 "data_offset": 0, 00:09:26.888 "data_size": 65536 00:09:26.888 } 00:09:26.888 ] 00:09:26.888 }' 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.888 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.456 [2024-12-06 09:46:52.567641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.456 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.456 "name": "Existed_Raid", 00:09:27.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.457 "strip_size_kb": 0, 00:09:27.457 "state": "configuring", 00:09:27.457 "raid_level": "raid1", 00:09:27.457 "superblock": false, 00:09:27.457 "num_base_bdevs": 3, 
00:09:27.457 "num_base_bdevs_discovered": 1, 00:09:27.457 "num_base_bdevs_operational": 3, 00:09:27.457 "base_bdevs_list": [ 00:09:27.457 { 00:09:27.457 "name": "BaseBdev1", 00:09:27.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.457 "is_configured": false, 00:09:27.457 "data_offset": 0, 00:09:27.457 "data_size": 0 00:09:27.457 }, 00:09:27.457 { 00:09:27.457 "name": null, 00:09:27.457 "uuid": "89842a09-b97a-4303-b516-d8e37224d163", 00:09:27.457 "is_configured": false, 00:09:27.457 "data_offset": 0, 00:09:27.457 "data_size": 65536 00:09:27.457 }, 00:09:27.457 { 00:09:27.457 "name": "BaseBdev3", 00:09:27.457 "uuid": "7965755c-3654-4ef0-a7c8-33ba4c315503", 00:09:27.457 "is_configured": true, 00:09:27.457 "data_offset": 0, 00:09:27.457 "data_size": 65536 00:09:27.457 } 00:09:27.457 ] 00:09:27.457 }' 00:09:27.457 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.457 09:46:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.027 09:46:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.027 09:46:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 [2024-12-06 09:46:53.088451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.027 BaseBdev1 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 [ 00:09:28.027 { 00:09:28.027 "name": "BaseBdev1", 00:09:28.027 "aliases": [ 00:09:28.027 "d5a15e38-f54c-42ab-9055-d7c2f547d8c3" 00:09:28.027 ], 00:09:28.027 "product_name": "Malloc disk", 
00:09:28.027 "block_size": 512, 00:09:28.027 "num_blocks": 65536, 00:09:28.027 "uuid": "d5a15e38-f54c-42ab-9055-d7c2f547d8c3", 00:09:28.027 "assigned_rate_limits": { 00:09:28.027 "rw_ios_per_sec": 0, 00:09:28.027 "rw_mbytes_per_sec": 0, 00:09:28.027 "r_mbytes_per_sec": 0, 00:09:28.027 "w_mbytes_per_sec": 0 00:09:28.027 }, 00:09:28.027 "claimed": true, 00:09:28.027 "claim_type": "exclusive_write", 00:09:28.027 "zoned": false, 00:09:28.027 "supported_io_types": { 00:09:28.027 "read": true, 00:09:28.027 "write": true, 00:09:28.027 "unmap": true, 00:09:28.027 "flush": true, 00:09:28.027 "reset": true, 00:09:28.027 "nvme_admin": false, 00:09:28.027 "nvme_io": false, 00:09:28.027 "nvme_io_md": false, 00:09:28.027 "write_zeroes": true, 00:09:28.027 "zcopy": true, 00:09:28.027 "get_zone_info": false, 00:09:28.027 "zone_management": false, 00:09:28.027 "zone_append": false, 00:09:28.027 "compare": false, 00:09:28.027 "compare_and_write": false, 00:09:28.027 "abort": true, 00:09:28.027 "seek_hole": false, 00:09:28.027 "seek_data": false, 00:09:28.027 "copy": true, 00:09:28.027 "nvme_iov_md": false 00:09:28.027 }, 00:09:28.027 "memory_domains": [ 00:09:28.027 { 00:09:28.027 "dma_device_id": "system", 00:09:28.027 "dma_device_type": 1 00:09:28.027 }, 00:09:28.027 { 00:09:28.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.027 "dma_device_type": 2 00:09:28.027 } 00:09:28.027 ], 00:09:28.027 "driver_specific": {} 00:09:28.027 } 00:09:28.027 ] 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.027 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.027 "name": "Existed_Raid", 00:09:28.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.027 "strip_size_kb": 0, 00:09:28.027 "state": "configuring", 00:09:28.027 "raid_level": "raid1", 00:09:28.027 "superblock": false, 00:09:28.027 "num_base_bdevs": 3, 00:09:28.027 "num_base_bdevs_discovered": 2, 00:09:28.027 "num_base_bdevs_operational": 3, 00:09:28.027 "base_bdevs_list": [ 00:09:28.027 { 00:09:28.027 "name": "BaseBdev1", 00:09:28.027 "uuid": 
"d5a15e38-f54c-42ab-9055-d7c2f547d8c3", 00:09:28.027 "is_configured": true, 00:09:28.027 "data_offset": 0, 00:09:28.027 "data_size": 65536 00:09:28.027 }, 00:09:28.027 { 00:09:28.027 "name": null, 00:09:28.027 "uuid": "89842a09-b97a-4303-b516-d8e37224d163", 00:09:28.028 "is_configured": false, 00:09:28.028 "data_offset": 0, 00:09:28.028 "data_size": 65536 00:09:28.028 }, 00:09:28.028 { 00:09:28.028 "name": "BaseBdev3", 00:09:28.028 "uuid": "7965755c-3654-4ef0-a7c8-33ba4c315503", 00:09:28.028 "is_configured": true, 00:09:28.028 "data_offset": 0, 00:09:28.028 "data_size": 65536 00:09:28.028 } 00:09:28.028 ] 00:09:28.028 }' 00:09:28.028 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.028 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.287 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:28.287 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.287 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.287 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.287 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.287 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:28.287 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:28.287 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.287 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.547 [2024-12-06 09:46:53.563718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:28.547 09:46:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.547 "name": "Existed_Raid", 00:09:28.547 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:28.547 "strip_size_kb": 0, 00:09:28.547 "state": "configuring", 00:09:28.547 "raid_level": "raid1", 00:09:28.547 "superblock": false, 00:09:28.547 "num_base_bdevs": 3, 00:09:28.547 "num_base_bdevs_discovered": 1, 00:09:28.547 "num_base_bdevs_operational": 3, 00:09:28.547 "base_bdevs_list": [ 00:09:28.547 { 00:09:28.547 "name": "BaseBdev1", 00:09:28.547 "uuid": "d5a15e38-f54c-42ab-9055-d7c2f547d8c3", 00:09:28.547 "is_configured": true, 00:09:28.547 "data_offset": 0, 00:09:28.547 "data_size": 65536 00:09:28.547 }, 00:09:28.547 { 00:09:28.547 "name": null, 00:09:28.547 "uuid": "89842a09-b97a-4303-b516-d8e37224d163", 00:09:28.547 "is_configured": false, 00:09:28.547 "data_offset": 0, 00:09:28.547 "data_size": 65536 00:09:28.547 }, 00:09:28.547 { 00:09:28.547 "name": null, 00:09:28.547 "uuid": "7965755c-3654-4ef0-a7c8-33ba4c315503", 00:09:28.547 "is_configured": false, 00:09:28.547 "data_offset": 0, 00:09:28.547 "data_size": 65536 00:09:28.547 } 00:09:28.547 ] 00:09:28.547 }' 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.547 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.866 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:28.866 09:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.866 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.866 09:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.866 [2024-12-06 09:46:54.026971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.866 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.866 "name": "Existed_Raid", 00:09:28.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.866 "strip_size_kb": 0, 00:09:28.866 "state": "configuring", 00:09:28.866 "raid_level": "raid1", 00:09:28.866 "superblock": false, 00:09:28.866 "num_base_bdevs": 3, 00:09:28.866 "num_base_bdevs_discovered": 2, 00:09:28.866 "num_base_bdevs_operational": 3, 00:09:28.866 "base_bdevs_list": [ 00:09:28.866 { 00:09:28.867 "name": "BaseBdev1", 00:09:28.867 "uuid": "d5a15e38-f54c-42ab-9055-d7c2f547d8c3", 00:09:28.867 "is_configured": true, 00:09:28.867 "data_offset": 0, 00:09:28.867 "data_size": 65536 00:09:28.867 }, 00:09:28.867 { 00:09:28.867 "name": null, 00:09:28.867 "uuid": "89842a09-b97a-4303-b516-d8e37224d163", 00:09:28.867 "is_configured": false, 00:09:28.867 "data_offset": 0, 00:09:28.867 "data_size": 65536 00:09:28.867 }, 00:09:28.867 { 00:09:28.867 "name": "BaseBdev3", 00:09:28.867 "uuid": "7965755c-3654-4ef0-a7c8-33ba4c315503", 00:09:28.867 "is_configured": true, 00:09:28.867 "data_offset": 0, 00:09:28.867 "data_size": 65536 00:09:28.867 } 00:09:28.867 ] 00:09:28.867 }' 00:09:28.867 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.867 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.458 [2024-12-06 09:46:54.478253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.458 "name": "Existed_Raid", 00:09:29.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.458 "strip_size_kb": 0, 00:09:29.458 "state": "configuring", 00:09:29.458 "raid_level": "raid1", 00:09:29.458 "superblock": false, 00:09:29.458 "num_base_bdevs": 3, 00:09:29.458 "num_base_bdevs_discovered": 1, 00:09:29.458 "num_base_bdevs_operational": 3, 00:09:29.458 "base_bdevs_list": [ 00:09:29.458 { 00:09:29.458 "name": null, 00:09:29.458 "uuid": "d5a15e38-f54c-42ab-9055-d7c2f547d8c3", 00:09:29.458 "is_configured": false, 00:09:29.458 "data_offset": 0, 00:09:29.458 "data_size": 65536 00:09:29.458 }, 00:09:29.458 { 00:09:29.458 "name": null, 00:09:29.458 "uuid": "89842a09-b97a-4303-b516-d8e37224d163", 00:09:29.458 "is_configured": false, 00:09:29.458 "data_offset": 0, 00:09:29.458 "data_size": 65536 00:09:29.458 }, 00:09:29.458 { 00:09:29.458 "name": "BaseBdev3", 00:09:29.458 "uuid": "7965755c-3654-4ef0-a7c8-33ba4c315503", 00:09:29.458 "is_configured": true, 00:09:29.458 "data_offset": 0, 00:09:29.458 "data_size": 65536 00:09:29.458 } 00:09:29.458 ] 00:09:29.458 }' 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.458 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:29.718 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:29.718 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.718 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.718 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.718 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.978 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:29.978 09:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:29.978 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.978 09:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.978 [2024-12-06 09:46:55.003465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.978 "name": "Existed_Raid", 00:09:29.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.978 "strip_size_kb": 0, 00:09:29.978 "state": "configuring", 00:09:29.978 "raid_level": "raid1", 00:09:29.978 "superblock": false, 00:09:29.978 "num_base_bdevs": 3, 00:09:29.978 "num_base_bdevs_discovered": 2, 00:09:29.978 "num_base_bdevs_operational": 3, 00:09:29.978 "base_bdevs_list": [ 00:09:29.978 { 00:09:29.978 "name": null, 00:09:29.978 "uuid": "d5a15e38-f54c-42ab-9055-d7c2f547d8c3", 00:09:29.978 "is_configured": false, 00:09:29.978 "data_offset": 0, 00:09:29.978 "data_size": 65536 00:09:29.978 }, 00:09:29.978 { 00:09:29.978 "name": "BaseBdev2", 00:09:29.978 "uuid": "89842a09-b97a-4303-b516-d8e37224d163", 00:09:29.978 "is_configured": true, 00:09:29.978 "data_offset": 0, 00:09:29.978 "data_size": 65536 00:09:29.978 }, 00:09:29.978 { 00:09:29.978 "name": "BaseBdev3", 
00:09:29.978 "uuid": "7965755c-3654-4ef0-a7c8-33ba4c315503", 00:09:29.978 "is_configured": true, 00:09:29.978 "data_offset": 0, 00:09:29.978 "data_size": 65536 00:09:29.978 } 00:09:29.978 ] 00:09:29.978 }' 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.978 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.237 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.237 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:30.237 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.237 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.237 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d5a15e38-f54c-42ab-9055-d7c2f547d8c3 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.497 [2024-12-06 09:46:55.607786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:30.497 [2024-12-06 09:46:55.607851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:30.497 [2024-12-06 09:46:55.607859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:30.497 [2024-12-06 09:46:55.608115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:30.497 [2024-12-06 09:46:55.608295] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:30.497 [2024-12-06 09:46:55.608315] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:30.497 [2024-12-06 09:46:55.608540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.497 NewBaseBdev 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.497 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.498 
09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.498 [ 00:09:30.498 { 00:09:30.498 "name": "NewBaseBdev", 00:09:30.498 "aliases": [ 00:09:30.498 "d5a15e38-f54c-42ab-9055-d7c2f547d8c3" 00:09:30.498 ], 00:09:30.498 "product_name": "Malloc disk", 00:09:30.498 "block_size": 512, 00:09:30.498 "num_blocks": 65536, 00:09:30.498 "uuid": "d5a15e38-f54c-42ab-9055-d7c2f547d8c3", 00:09:30.498 "assigned_rate_limits": { 00:09:30.498 "rw_ios_per_sec": 0, 00:09:30.498 "rw_mbytes_per_sec": 0, 00:09:30.498 "r_mbytes_per_sec": 0, 00:09:30.498 "w_mbytes_per_sec": 0 00:09:30.498 }, 00:09:30.498 "claimed": true, 00:09:30.498 "claim_type": "exclusive_write", 00:09:30.498 "zoned": false, 00:09:30.498 "supported_io_types": { 00:09:30.498 "read": true, 00:09:30.498 "write": true, 00:09:30.498 "unmap": true, 00:09:30.498 "flush": true, 00:09:30.498 "reset": true, 00:09:30.498 "nvme_admin": false, 00:09:30.498 "nvme_io": false, 00:09:30.498 "nvme_io_md": false, 00:09:30.498 "write_zeroes": true, 00:09:30.498 "zcopy": true, 00:09:30.498 "get_zone_info": false, 00:09:30.498 "zone_management": false, 00:09:30.498 "zone_append": false, 00:09:30.498 "compare": false, 00:09:30.498 "compare_and_write": false, 00:09:30.498 "abort": true, 00:09:30.498 "seek_hole": false, 00:09:30.498 "seek_data": false, 00:09:30.498 "copy": true, 00:09:30.498 "nvme_iov_md": false 00:09:30.498 }, 00:09:30.498 "memory_domains": [ 00:09:30.498 { 00:09:30.498 "dma_device_id": "system", 00:09:30.498 "dma_device_type": 1 
00:09:30.498 }, 00:09:30.498 { 00:09:30.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.498 "dma_device_type": 2 00:09:30.498 } 00:09:30.498 ], 00:09:30.498 "driver_specific": {} 00:09:30.498 } 00:09:30.498 ] 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.498 "name": "Existed_Raid", 00:09:30.498 "uuid": "005454b3-0b13-40c7-9f3f-3be223768b08", 00:09:30.498 "strip_size_kb": 0, 00:09:30.498 "state": "online", 00:09:30.498 "raid_level": "raid1", 00:09:30.498 "superblock": false, 00:09:30.498 "num_base_bdevs": 3, 00:09:30.498 "num_base_bdevs_discovered": 3, 00:09:30.498 "num_base_bdevs_operational": 3, 00:09:30.498 "base_bdevs_list": [ 00:09:30.498 { 00:09:30.498 "name": "NewBaseBdev", 00:09:30.498 "uuid": "d5a15e38-f54c-42ab-9055-d7c2f547d8c3", 00:09:30.498 "is_configured": true, 00:09:30.498 "data_offset": 0, 00:09:30.498 "data_size": 65536 00:09:30.498 }, 00:09:30.498 { 00:09:30.498 "name": "BaseBdev2", 00:09:30.498 "uuid": "89842a09-b97a-4303-b516-d8e37224d163", 00:09:30.498 "is_configured": true, 00:09:30.498 "data_offset": 0, 00:09:30.498 "data_size": 65536 00:09:30.498 }, 00:09:30.498 { 00:09:30.498 "name": "BaseBdev3", 00:09:30.498 "uuid": "7965755c-3654-4ef0-a7c8-33ba4c315503", 00:09:30.498 "is_configured": true, 00:09:30.498 "data_offset": 0, 00:09:30.498 "data_size": 65536 00:09:30.498 } 00:09:30.498 ] 00:09:30.498 }' 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.498 09:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.066 [2024-12-06 09:46:56.091299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.066 "name": "Existed_Raid", 00:09:31.066 "aliases": [ 00:09:31.066 "005454b3-0b13-40c7-9f3f-3be223768b08" 00:09:31.066 ], 00:09:31.066 "product_name": "Raid Volume", 00:09:31.066 "block_size": 512, 00:09:31.066 "num_blocks": 65536, 00:09:31.066 "uuid": "005454b3-0b13-40c7-9f3f-3be223768b08", 00:09:31.066 "assigned_rate_limits": { 00:09:31.066 "rw_ios_per_sec": 0, 00:09:31.066 "rw_mbytes_per_sec": 0, 00:09:31.066 "r_mbytes_per_sec": 0, 00:09:31.066 "w_mbytes_per_sec": 0 00:09:31.066 }, 00:09:31.066 "claimed": false, 00:09:31.066 "zoned": false, 00:09:31.066 "supported_io_types": { 00:09:31.066 "read": true, 00:09:31.066 "write": true, 00:09:31.066 "unmap": false, 00:09:31.066 "flush": false, 00:09:31.066 "reset": true, 00:09:31.066 "nvme_admin": false, 00:09:31.066 "nvme_io": false, 00:09:31.066 "nvme_io_md": false, 00:09:31.066 "write_zeroes": true, 00:09:31.066 "zcopy": false, 00:09:31.066 "get_zone_info": false, 00:09:31.066 "zone_management": false, 00:09:31.066 
"zone_append": false, 00:09:31.066 "compare": false, 00:09:31.066 "compare_and_write": false, 00:09:31.066 "abort": false, 00:09:31.066 "seek_hole": false, 00:09:31.066 "seek_data": false, 00:09:31.066 "copy": false, 00:09:31.066 "nvme_iov_md": false 00:09:31.066 }, 00:09:31.066 "memory_domains": [ 00:09:31.066 { 00:09:31.066 "dma_device_id": "system", 00:09:31.066 "dma_device_type": 1 00:09:31.066 }, 00:09:31.066 { 00:09:31.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.066 "dma_device_type": 2 00:09:31.066 }, 00:09:31.066 { 00:09:31.066 "dma_device_id": "system", 00:09:31.066 "dma_device_type": 1 00:09:31.066 }, 00:09:31.066 { 00:09:31.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.066 "dma_device_type": 2 00:09:31.066 }, 00:09:31.066 { 00:09:31.066 "dma_device_id": "system", 00:09:31.066 "dma_device_type": 1 00:09:31.066 }, 00:09:31.066 { 00:09:31.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.066 "dma_device_type": 2 00:09:31.066 } 00:09:31.066 ], 00:09:31.066 "driver_specific": { 00:09:31.066 "raid": { 00:09:31.066 "uuid": "005454b3-0b13-40c7-9f3f-3be223768b08", 00:09:31.066 "strip_size_kb": 0, 00:09:31.066 "state": "online", 00:09:31.066 "raid_level": "raid1", 00:09:31.066 "superblock": false, 00:09:31.066 "num_base_bdevs": 3, 00:09:31.066 "num_base_bdevs_discovered": 3, 00:09:31.066 "num_base_bdevs_operational": 3, 00:09:31.066 "base_bdevs_list": [ 00:09:31.066 { 00:09:31.066 "name": "NewBaseBdev", 00:09:31.066 "uuid": "d5a15e38-f54c-42ab-9055-d7c2f547d8c3", 00:09:31.066 "is_configured": true, 00:09:31.066 "data_offset": 0, 00:09:31.066 "data_size": 65536 00:09:31.066 }, 00:09:31.066 { 00:09:31.066 "name": "BaseBdev2", 00:09:31.066 "uuid": "89842a09-b97a-4303-b516-d8e37224d163", 00:09:31.066 "is_configured": true, 00:09:31.066 "data_offset": 0, 00:09:31.066 "data_size": 65536 00:09:31.066 }, 00:09:31.066 { 00:09:31.066 "name": "BaseBdev3", 00:09:31.066 "uuid": "7965755c-3654-4ef0-a7c8-33ba4c315503", 00:09:31.066 "is_configured": true, 
00:09:31.066 "data_offset": 0, 00:09:31.066 "data_size": 65536 00:09:31.066 } 00:09:31.066 ] 00:09:31.066 } 00:09:31.066 } 00:09:31.066 }' 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:31.066 BaseBdev2 00:09:31.066 BaseBdev3' 00:09:31.066 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.067 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.326 [2024-12-06 09:46:56.358551] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:31.326 [2024-12-06 09:46:56.358589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.326 [2024-12-06 09:46:56.358687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.326 [2024-12-06 09:46:56.358964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.326 [2024-12-06 09:46:56.358981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67347 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67347 ']' 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67347 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67347 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.326 killing process with pid 67347 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67347' 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67347 00:09:31.326 [2024-12-06 09:46:56.403456] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:31.326 09:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67347 00:09:31.586 [2024-12-06 09:46:56.703160] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:32.967 00:09:32.967 real 0m10.380s 00:09:32.967 user 0m16.555s 00:09:32.967 sys 0m1.683s 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.967 ************************************ 00:09:32.967 END TEST raid_state_function_test 00:09:32.967 ************************************ 00:09:32.967 09:46:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:32.967 09:46:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:32.967 09:46:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.967 09:46:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.967 ************************************ 00:09:32.967 START TEST raid_state_function_test_sb 00:09:32.967 ************************************ 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67968 00:09:32.967 Process raid pid: 67968 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67968' 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67968 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67968 ']' 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.967 09:46:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.967 [2024-12-06 09:46:58.002610] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:09:32.967 [2024-12-06 09:46:58.002735] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.967 [2024-12-06 09:46:58.159852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.227 [2024-12-06 09:46:58.272940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.227 [2024-12-06 09:46:58.484423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.227 [2024-12-06 09:46:58.484456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.795 [2024-12-06 09:46:58.837263] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.795 [2024-12-06 09:46:58.837337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.795 [2024-12-06 09:46:58.837354] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.795 [2024-12-06 09:46:58.837366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.795 [2024-12-06 09:46:58.837373] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:33.795 [2024-12-06 09:46:58.837384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.795 "name": "Existed_Raid", 00:09:33.795 "uuid": "e34de4e1-3d26-42e8-9807-7738045dc3d9", 00:09:33.795 "strip_size_kb": 0, 00:09:33.795 "state": "configuring", 00:09:33.795 "raid_level": "raid1", 00:09:33.795 "superblock": true, 00:09:33.795 "num_base_bdevs": 3, 00:09:33.795 "num_base_bdevs_discovered": 0, 00:09:33.795 "num_base_bdevs_operational": 3, 00:09:33.795 "base_bdevs_list": [ 00:09:33.795 { 00:09:33.795 "name": "BaseBdev1", 00:09:33.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.795 "is_configured": false, 00:09:33.795 "data_offset": 0, 00:09:33.795 "data_size": 0 00:09:33.795 }, 00:09:33.795 { 00:09:33.795 "name": "BaseBdev2", 00:09:33.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.795 "is_configured": false, 00:09:33.795 "data_offset": 0, 00:09:33.795 "data_size": 0 00:09:33.795 }, 00:09:33.795 { 00:09:33.795 "name": "BaseBdev3", 00:09:33.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.795 "is_configured": false, 00:09:33.795 "data_offset": 0, 00:09:33.795 "data_size": 0 00:09:33.795 } 00:09:33.795 ] 00:09:33.795 }' 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.795 09:46:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.055 [2024-12-06 09:46:59.244518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.055 [2024-12-06 09:46:59.244562] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.055 [2024-12-06 09:46:59.256515] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.055 [2024-12-06 09:46:59.256565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.055 [2024-12-06 09:46:59.256575] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.055 [2024-12-06 09:46:59.256584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.055 [2024-12-06 09:46:59.256591] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.055 [2024-12-06 09:46:59.256599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.055 [2024-12-06 09:46:59.304968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.055 BaseBdev1 
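The `verify_raid_bdev_state` helper exercised above filters `rpc_cmd bdev_raid_get_bdevs all` through jq and checks the volume's name, state, level, strip size, and base-bdev counts. The sketch below mirrors those checks in Python against the "configuring" snapshot dumped earlier in this log; the function and the abridged snapshot are illustrative, not SPDK's implementation.

```python
import json

# Abridged copy of the raid_bdev_info JSON printed by the test above.
RAID_INFO = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": "BaseBdev2", "is_configured": false},
    {"name": "BaseBdev3", "is_configured": false}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    """Sketch of the shell helper's checks: state, level, strip size,
    operational count, and that the discovered count agrees with the
    number of is_configured base bdevs in the list."""
    configured = sum(b["is_configured"] for b in info["base_bdevs_list"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational
            and info["num_base_bdevs_discovered"] == configured)

print(verify_raid_bdev_state(RAID_INFO, "configuring", "raid1", 0, 3))  # True
```

With superblock raids (`-s`), the test later expects `data_offset` to move to 2048 and `data_size` to shrink to 63488 once a base bdev is configured, as the subsequent dumps show.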
00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.055 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.314 [ 00:09:34.314 { 00:09:34.314 "name": "BaseBdev1", 00:09:34.314 "aliases": [ 00:09:34.314 "cbbdc8b7-b984-495c-bb80-08759f374b55" 00:09:34.314 ], 00:09:34.314 "product_name": "Malloc disk", 00:09:34.314 "block_size": 512, 00:09:34.314 "num_blocks": 65536, 00:09:34.314 "uuid": "cbbdc8b7-b984-495c-bb80-08759f374b55", 00:09:34.314 "assigned_rate_limits": { 00:09:34.314 
"rw_ios_per_sec": 0, 00:09:34.314 "rw_mbytes_per_sec": 0, 00:09:34.314 "r_mbytes_per_sec": 0, 00:09:34.314 "w_mbytes_per_sec": 0 00:09:34.314 }, 00:09:34.314 "claimed": true, 00:09:34.314 "claim_type": "exclusive_write", 00:09:34.314 "zoned": false, 00:09:34.314 "supported_io_types": { 00:09:34.314 "read": true, 00:09:34.314 "write": true, 00:09:34.314 "unmap": true, 00:09:34.314 "flush": true, 00:09:34.314 "reset": true, 00:09:34.314 "nvme_admin": false, 00:09:34.314 "nvme_io": false, 00:09:34.314 "nvme_io_md": false, 00:09:34.314 "write_zeroes": true, 00:09:34.314 "zcopy": true, 00:09:34.314 "get_zone_info": false, 00:09:34.314 "zone_management": false, 00:09:34.314 "zone_append": false, 00:09:34.314 "compare": false, 00:09:34.314 "compare_and_write": false, 00:09:34.314 "abort": true, 00:09:34.314 "seek_hole": false, 00:09:34.314 "seek_data": false, 00:09:34.314 "copy": true, 00:09:34.314 "nvme_iov_md": false 00:09:34.314 }, 00:09:34.314 "memory_domains": [ 00:09:34.314 { 00:09:34.314 "dma_device_id": "system", 00:09:34.314 "dma_device_type": 1 00:09:34.314 }, 00:09:34.314 { 00:09:34.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.314 "dma_device_type": 2 00:09:34.314 } 00:09:34.314 ], 00:09:34.314 "driver_specific": {} 00:09:34.314 } 00:09:34.314 ] 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.314 "name": "Existed_Raid", 00:09:34.314 "uuid": "2fb00f1d-f558-4371-93e2-bffc48d021e5", 00:09:34.314 "strip_size_kb": 0, 00:09:34.314 "state": "configuring", 00:09:34.314 "raid_level": "raid1", 00:09:34.314 "superblock": true, 00:09:34.314 "num_base_bdevs": 3, 00:09:34.314 "num_base_bdevs_discovered": 1, 00:09:34.314 "num_base_bdevs_operational": 3, 00:09:34.314 "base_bdevs_list": [ 00:09:34.314 { 00:09:34.314 "name": "BaseBdev1", 00:09:34.314 "uuid": "cbbdc8b7-b984-495c-bb80-08759f374b55", 00:09:34.314 "is_configured": true, 00:09:34.314 "data_offset": 2048, 00:09:34.314 "data_size": 63488 
00:09:34.314 }, 00:09:34.314 { 00:09:34.314 "name": "BaseBdev2", 00:09:34.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.314 "is_configured": false, 00:09:34.314 "data_offset": 0, 00:09:34.314 "data_size": 0 00:09:34.314 }, 00:09:34.314 { 00:09:34.314 "name": "BaseBdev3", 00:09:34.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.314 "is_configured": false, 00:09:34.314 "data_offset": 0, 00:09:34.314 "data_size": 0 00:09:34.314 } 00:09:34.314 ] 00:09:34.314 }' 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.314 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.574 [2024-12-06 09:46:59.764269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.574 [2024-12-06 09:46:59.764333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.574 [2024-12-06 09:46:59.776279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.574 [2024-12-06 09:46:59.778213] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.574 [2024-12-06 09:46:59.778256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.574 [2024-12-06 09:46:59.778267] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.574 [2024-12-06 09:46:59.778275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.574 "name": "Existed_Raid", 00:09:34.574 "uuid": "6e2f2195-ed74-40fb-892f-6373570ed8cf", 00:09:34.574 "strip_size_kb": 0, 00:09:34.574 "state": "configuring", 00:09:34.574 "raid_level": "raid1", 00:09:34.574 "superblock": true, 00:09:34.574 "num_base_bdevs": 3, 00:09:34.574 "num_base_bdevs_discovered": 1, 00:09:34.574 "num_base_bdevs_operational": 3, 00:09:34.574 "base_bdevs_list": [ 00:09:34.574 { 00:09:34.574 "name": "BaseBdev1", 00:09:34.574 "uuid": "cbbdc8b7-b984-495c-bb80-08759f374b55", 00:09:34.574 "is_configured": true, 00:09:34.574 "data_offset": 2048, 00:09:34.574 "data_size": 63488 00:09:34.574 }, 00:09:34.574 { 00:09:34.574 "name": "BaseBdev2", 00:09:34.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.574 "is_configured": false, 00:09:34.574 "data_offset": 0, 00:09:34.574 "data_size": 0 00:09:34.574 }, 00:09:34.574 { 00:09:34.574 "name": "BaseBdev3", 00:09:34.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.574 "is_configured": false, 00:09:34.574 "data_offset": 0, 00:09:34.574 "data_size": 0 00:09:34.574 } 00:09:34.574 ] 00:09:34.574 }' 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.574 09:46:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.144 [2024-12-06 09:47:00.250587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.144 BaseBdev2 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.144 [ 00:09:35.144 { 00:09:35.144 "name": "BaseBdev2", 00:09:35.144 "aliases": [ 00:09:35.144 "fe35e4a4-fcab-44f9-9767-85d90f65bd5b" 00:09:35.144 ], 00:09:35.144 "product_name": "Malloc disk", 00:09:35.144 "block_size": 512, 00:09:35.144 "num_blocks": 65536, 00:09:35.144 "uuid": "fe35e4a4-fcab-44f9-9767-85d90f65bd5b", 00:09:35.144 "assigned_rate_limits": { 00:09:35.144 "rw_ios_per_sec": 0, 00:09:35.144 "rw_mbytes_per_sec": 0, 00:09:35.144 "r_mbytes_per_sec": 0, 00:09:35.144 "w_mbytes_per_sec": 0 00:09:35.144 }, 00:09:35.144 "claimed": true, 00:09:35.144 "claim_type": "exclusive_write", 00:09:35.144 "zoned": false, 00:09:35.144 "supported_io_types": { 00:09:35.144 "read": true, 00:09:35.144 "write": true, 00:09:35.144 "unmap": true, 00:09:35.144 "flush": true, 00:09:35.144 "reset": true, 00:09:35.144 "nvme_admin": false, 00:09:35.144 "nvme_io": false, 00:09:35.144 "nvme_io_md": false, 00:09:35.144 "write_zeroes": true, 00:09:35.144 "zcopy": true, 00:09:35.144 "get_zone_info": false, 00:09:35.144 "zone_management": false, 00:09:35.144 "zone_append": false, 00:09:35.144 "compare": false, 00:09:35.144 "compare_and_write": false, 00:09:35.144 "abort": true, 00:09:35.144 "seek_hole": false, 00:09:35.144 "seek_data": false, 00:09:35.144 "copy": true, 00:09:35.144 "nvme_iov_md": false 00:09:35.144 }, 00:09:35.144 "memory_domains": [ 00:09:35.144 { 00:09:35.144 "dma_device_id": "system", 00:09:35.144 "dma_device_type": 1 00:09:35.144 }, 00:09:35.144 { 00:09:35.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.144 "dma_device_type": 2 00:09:35.144 } 00:09:35.144 ], 00:09:35.144 "driver_specific": {} 00:09:35.144 } 00:09:35.144 ] 00:09:35.144 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
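The `waitforbdev` helper used after each `bdev_malloc_create` calls `rpc_cmd bdev_get_bdevs -b NAME -t 2000`; the `-t` flag makes the RPC itself wait up to 2 s for the bdev to register. The generic poll-until-timeout shape of that wait can be sketched client-side as below; `lookup` stands in for the RPC call and all names here are illustrative.

```python
import time

def wait_for_bdev(lookup, name, timeout_s=2.0, interval_s=0.1):
    """Poll lookup(name) until it returns a truthy descriptor or the
    timeout elapses; raises TimeoutError otherwise."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        bdev = lookup(name)
        if bdev:
            return bdev
        time.sleep(interval_s)
    raise TimeoutError(f"bdev {name!r} did not appear within {timeout_s}s")

# Fake registry standing in for the RPC backend: BaseBdev2 exists.
registry = {"BaseBdev2": {"name": "BaseBdev2", "block_size": 512}}
print(wait_for_bdev(registry.get, "BaseBdev2")["name"])  # BaseBdev2
```

Polling on the client would be wasteful here; delegating the wait to the RPC server via `-t`, as the test does, avoids the sleep loop entirely.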
00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.145 
09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.145 "name": "Existed_Raid", 00:09:35.145 "uuid": "6e2f2195-ed74-40fb-892f-6373570ed8cf", 00:09:35.145 "strip_size_kb": 0, 00:09:35.145 "state": "configuring", 00:09:35.145 "raid_level": "raid1", 00:09:35.145 "superblock": true, 00:09:35.145 "num_base_bdevs": 3, 00:09:35.145 "num_base_bdevs_discovered": 2, 00:09:35.145 "num_base_bdevs_operational": 3, 00:09:35.145 "base_bdevs_list": [ 00:09:35.145 { 00:09:35.145 "name": "BaseBdev1", 00:09:35.145 "uuid": "cbbdc8b7-b984-495c-bb80-08759f374b55", 00:09:35.145 "is_configured": true, 00:09:35.145 "data_offset": 2048, 00:09:35.145 "data_size": 63488 00:09:35.145 }, 00:09:35.145 { 00:09:35.145 "name": "BaseBdev2", 00:09:35.145 "uuid": "fe35e4a4-fcab-44f9-9767-85d90f65bd5b", 00:09:35.145 "is_configured": true, 00:09:35.145 "data_offset": 2048, 00:09:35.145 "data_size": 63488 00:09:35.145 }, 00:09:35.145 { 00:09:35.145 "name": "BaseBdev3", 00:09:35.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.145 "is_configured": false, 00:09:35.145 "data_offset": 0, 00:09:35.145 "data_size": 0 00:09:35.145 } 00:09:35.145 ] 00:09:35.145 }' 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.145 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.790 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:35.790 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.790 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.790 [2024-12-06 09:47:00.761365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:35.790 [2024-12-06 09:47:00.761656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:35.790 [2024-12-06 09:47:00.761683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:35.790 [2024-12-06 09:47:00.761957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:35.790 [2024-12-06 09:47:00.762123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.790 [2024-12-06 09:47:00.762139] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:35.790 BaseBdev3 00:09:35.790 [2024-12-06 09:47:00.762299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.790 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.790 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:35.790 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:35.790 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.790 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.791 09:47:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.791 [ 00:09:35.791 { 00:09:35.791 "name": "BaseBdev3", 00:09:35.791 "aliases": [ 00:09:35.791 "7c39f45d-bf80-4a61-ba1e-109567a7bf22" 00:09:35.791 ], 00:09:35.791 "product_name": "Malloc disk", 00:09:35.791 "block_size": 512, 00:09:35.791 "num_blocks": 65536, 00:09:35.791 "uuid": "7c39f45d-bf80-4a61-ba1e-109567a7bf22", 00:09:35.791 "assigned_rate_limits": { 00:09:35.791 "rw_ios_per_sec": 0, 00:09:35.791 "rw_mbytes_per_sec": 0, 00:09:35.791 "r_mbytes_per_sec": 0, 00:09:35.791 "w_mbytes_per_sec": 0 00:09:35.791 }, 00:09:35.791 "claimed": true, 00:09:35.791 "claim_type": "exclusive_write", 00:09:35.791 "zoned": false, 00:09:35.791 "supported_io_types": { 00:09:35.791 "read": true, 00:09:35.791 "write": true, 00:09:35.791 "unmap": true, 00:09:35.791 "flush": true, 00:09:35.791 "reset": true, 00:09:35.791 "nvme_admin": false, 00:09:35.791 "nvme_io": false, 00:09:35.791 "nvme_io_md": false, 00:09:35.791 "write_zeroes": true, 00:09:35.791 "zcopy": true, 00:09:35.791 "get_zone_info": false, 00:09:35.791 "zone_management": false, 00:09:35.791 "zone_append": false, 00:09:35.791 "compare": false, 00:09:35.791 "compare_and_write": false, 00:09:35.791 "abort": true, 00:09:35.791 "seek_hole": false, 00:09:35.791 "seek_data": false, 00:09:35.791 "copy": true, 00:09:35.791 "nvme_iov_md": false 00:09:35.791 }, 00:09:35.791 "memory_domains": [ 00:09:35.791 { 00:09:35.791 "dma_device_id": "system", 00:09:35.791 "dma_device_type": 1 00:09:35.791 }, 00:09:35.791 { 00:09:35.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.791 "dma_device_type": 2 00:09:35.791 } 00:09:35.791 ], 00:09:35.791 "driver_specific": {} 00:09:35.791 } 00:09:35.791 ] 
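Each malloc base bdev descriptor printed above reports `"claimed": true` with `"claim_type": "exclusive_write"` once `raid_bdev_configure_base_bdev` takes it. A small check over the descriptor fields shown in the log (abridged copy; sketch only):

```python
import json

# Fields copied from the BaseBdev3 descriptor dumped above (abridged).
BASE_BDEV = json.loads("""
{
  "name": "BaseBdev3",
  "product_name": "Malloc disk",
  "block_size": 512,
  "num_blocks": 65536,
  "claimed": true,
  "claim_type": "exclusive_write",
  "supported_io_types": {"read": true, "write": true, "abort": true}
}
""")

def is_claimed_exclusively(bdev):
    """True when the bdev carries an exclusive_write claim, as raid
    members do after being configured into the array."""
    return bdev.get("claimed") is True and bdev.get("claim_type") == "exclusive_write"

# 512 B blocks x 65536 blocks = 32 MiB, matching `bdev_malloc_create 32 512`.
size_bytes = BASE_BDEV["block_size"] * BASE_BDEV["num_blocks"]
print(is_claimed_exclusively(BASE_BDEV), size_bytes)  # True 33554432
```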
00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.791 
09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.791 "name": "Existed_Raid", 00:09:35.791 "uuid": "6e2f2195-ed74-40fb-892f-6373570ed8cf", 00:09:35.791 "strip_size_kb": 0, 00:09:35.791 "state": "online", 00:09:35.791 "raid_level": "raid1", 00:09:35.791 "superblock": true, 00:09:35.791 "num_base_bdevs": 3, 00:09:35.791 "num_base_bdevs_discovered": 3, 00:09:35.791 "num_base_bdevs_operational": 3, 00:09:35.791 "base_bdevs_list": [ 00:09:35.791 { 00:09:35.791 "name": "BaseBdev1", 00:09:35.791 "uuid": "cbbdc8b7-b984-495c-bb80-08759f374b55", 00:09:35.791 "is_configured": true, 00:09:35.791 "data_offset": 2048, 00:09:35.791 "data_size": 63488 00:09:35.791 }, 00:09:35.791 { 00:09:35.791 "name": "BaseBdev2", 00:09:35.791 "uuid": "fe35e4a4-fcab-44f9-9767-85d90f65bd5b", 00:09:35.791 "is_configured": true, 00:09:35.791 "data_offset": 2048, 00:09:35.791 "data_size": 63488 00:09:35.791 }, 00:09:35.791 { 00:09:35.791 "name": "BaseBdev3", 00:09:35.791 "uuid": "7c39f45d-bf80-4a61-ba1e-109567a7bf22", 00:09:35.791 "is_configured": true, 00:09:35.791 "data_offset": 2048, 00:09:35.791 "data_size": 63488 00:09:35.791 } 00:09:35.791 ] 00:09:35.791 }' 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.791 09:47:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.051 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:36.051 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:36.051 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
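The `verify_raid_bdev_properties` helper entered above builds a `"block_size md_size md_interleave dif_type"` fingerprint with jq for the raid volume and for each configured base bdev, then string-compares them (hence the `'512 '` values with trailing spaces seen below, where the metadata fields are null). A Python sketch of that fingerprint comparison; the helper name and sample dicts are illustrative.

```python
def bdev_fingerprint(bdev):
    """Rough equivalent of
    jq '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")':
    null/missing fields render as empty strings, as jq's join does."""
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(f) is None else str(bdev[f]) for f in fields)

# Minimal samples: only block_size is set, as for the malloc bdevs in the log.
raid_vol = {"name": "Existed_Raid", "block_size": 512}
base = {"name": "BaseBdev1", "block_size": 512}
print(repr(bdev_fingerprint(raid_vol)))            # '512   '
print(bdev_fingerprint(raid_vol) == bdev_fingerprint(base))  # True
```

Comparing the joined string rather than individual fields keeps the shell helper to one `[[ ... == ... ]]` test per base bdev, at the cost of the whitespace-sensitive match visible in the trace.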
00:09:36.051 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.051 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.051 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.051 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:36.051 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.051 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.051 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.051 [2024-12-06 09:47:01.240954] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.051 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.051 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:36.051 "name": "Existed_Raid", 00:09:36.051 "aliases": [ 00:09:36.051 "6e2f2195-ed74-40fb-892f-6373570ed8cf" 00:09:36.051 ], 00:09:36.051 "product_name": "Raid Volume", 00:09:36.051 "block_size": 512, 00:09:36.051 "num_blocks": 63488, 00:09:36.051 "uuid": "6e2f2195-ed74-40fb-892f-6373570ed8cf", 00:09:36.051 "assigned_rate_limits": { 00:09:36.051 "rw_ios_per_sec": 0, 00:09:36.051 "rw_mbytes_per_sec": 0, 00:09:36.051 "r_mbytes_per_sec": 0, 00:09:36.051 "w_mbytes_per_sec": 0 00:09:36.051 }, 00:09:36.051 "claimed": false, 00:09:36.051 "zoned": false, 00:09:36.051 "supported_io_types": { 00:09:36.051 "read": true, 00:09:36.051 "write": true, 00:09:36.051 "unmap": false, 00:09:36.051 "flush": false, 00:09:36.051 "reset": true, 00:09:36.051 "nvme_admin": false, 00:09:36.051 "nvme_io": false, 00:09:36.051 "nvme_io_md": false, 00:09:36.051 "write_zeroes": true, 
00:09:36.051 "zcopy": false, 00:09:36.051 "get_zone_info": false, 00:09:36.051 "zone_management": false, 00:09:36.051 "zone_append": false, 00:09:36.051 "compare": false, 00:09:36.051 "compare_and_write": false, 00:09:36.051 "abort": false, 00:09:36.051 "seek_hole": false, 00:09:36.051 "seek_data": false, 00:09:36.051 "copy": false, 00:09:36.051 "nvme_iov_md": false 00:09:36.051 }, 00:09:36.051 "memory_domains": [ 00:09:36.051 { 00:09:36.051 "dma_device_id": "system", 00:09:36.051 "dma_device_type": 1 00:09:36.051 }, 00:09:36.051 { 00:09:36.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.051 "dma_device_type": 2 00:09:36.051 }, 00:09:36.051 { 00:09:36.051 "dma_device_id": "system", 00:09:36.051 "dma_device_type": 1 00:09:36.051 }, 00:09:36.051 { 00:09:36.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.051 "dma_device_type": 2 00:09:36.051 }, 00:09:36.051 { 00:09:36.051 "dma_device_id": "system", 00:09:36.051 "dma_device_type": 1 00:09:36.051 }, 00:09:36.051 { 00:09:36.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.051 "dma_device_type": 2 00:09:36.051 } 00:09:36.051 ], 00:09:36.051 "driver_specific": { 00:09:36.051 "raid": { 00:09:36.051 "uuid": "6e2f2195-ed74-40fb-892f-6373570ed8cf", 00:09:36.051 "strip_size_kb": 0, 00:09:36.051 "state": "online", 00:09:36.051 "raid_level": "raid1", 00:09:36.051 "superblock": true, 00:09:36.051 "num_base_bdevs": 3, 00:09:36.051 "num_base_bdevs_discovered": 3, 00:09:36.051 "num_base_bdevs_operational": 3, 00:09:36.051 "base_bdevs_list": [ 00:09:36.051 { 00:09:36.051 "name": "BaseBdev1", 00:09:36.051 "uuid": "cbbdc8b7-b984-495c-bb80-08759f374b55", 00:09:36.051 "is_configured": true, 00:09:36.051 "data_offset": 2048, 00:09:36.051 "data_size": 63488 00:09:36.051 }, 00:09:36.051 { 00:09:36.051 "name": "BaseBdev2", 00:09:36.051 "uuid": "fe35e4a4-fcab-44f9-9767-85d90f65bd5b", 00:09:36.051 "is_configured": true, 00:09:36.051 "data_offset": 2048, 00:09:36.051 "data_size": 63488 00:09:36.051 }, 00:09:36.051 { 
00:09:36.051 "name": "BaseBdev3", 00:09:36.051 "uuid": "7c39f45d-bf80-4a61-ba1e-109567a7bf22", 00:09:36.051 "is_configured": true, 00:09:36.051 "data_offset": 2048, 00:09:36.051 "data_size": 63488 00:09:36.051 } 00:09:36.051 ] 00:09:36.051 } 00:09:36.051 } 00:09:36.051 }' 00:09:36.051 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:36.311 BaseBdev2 00:09:36.311 BaseBdev3' 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.311 09:47:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.311 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.311 [2024-12-06 09:47:01.516232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.571 
09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.571 "name": "Existed_Raid", 00:09:36.571 "uuid": "6e2f2195-ed74-40fb-892f-6373570ed8cf", 00:09:36.571 "strip_size_kb": 0, 00:09:36.571 "state": "online", 00:09:36.571 "raid_level": "raid1", 00:09:36.571 "superblock": true, 00:09:36.571 "num_base_bdevs": 3, 00:09:36.571 "num_base_bdevs_discovered": 2, 00:09:36.571 "num_base_bdevs_operational": 2, 00:09:36.571 "base_bdevs_list": [ 00:09:36.571 { 00:09:36.571 "name": null, 00:09:36.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.571 "is_configured": false, 00:09:36.571 "data_offset": 0, 00:09:36.571 "data_size": 63488 00:09:36.571 }, 00:09:36.571 { 00:09:36.571 "name": "BaseBdev2", 00:09:36.571 "uuid": "fe35e4a4-fcab-44f9-9767-85d90f65bd5b", 00:09:36.571 "is_configured": true, 00:09:36.571 "data_offset": 2048, 00:09:36.571 "data_size": 63488 00:09:36.571 }, 00:09:36.571 { 00:09:36.571 "name": "BaseBdev3", 00:09:36.571 "uuid": "7c39f45d-bf80-4a61-ba1e-109567a7bf22", 00:09:36.571 "is_configured": true, 00:09:36.571 "data_offset": 2048, 00:09:36.571 "data_size": 63488 00:09:36.571 } 00:09:36.571 ] 00:09:36.571 }' 00:09:36.571 09:47:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.571 
09:47:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.831 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:36.831 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.831 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.831 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.831 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.831 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.831 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.831 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.831 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.831 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:36.831 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.831 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.831 [2024-12-06 09:47:02.080739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.090 [2024-12-06 09:47:02.224841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.090 [2024-12-06 09:47:02.224953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.090 [2024-12-06 09:47:02.320657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.090 [2024-12-06 09:47:02.320717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.090 [2024-12-06 09:47:02.320731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:37.090 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.350 BaseBdev2 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.350 09:47:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.350 [ 00:09:37.350 { 00:09:37.350 "name": "BaseBdev2", 00:09:37.350 "aliases": [ 00:09:37.350 "4756cbe4-3394-4f33-96a3-2cb1af4caf9d" 00:09:37.350 ], 00:09:37.350 "product_name": "Malloc disk", 00:09:37.350 "block_size": 512, 00:09:37.350 "num_blocks": 65536, 00:09:37.350 "uuid": "4756cbe4-3394-4f33-96a3-2cb1af4caf9d", 00:09:37.350 "assigned_rate_limits": { 00:09:37.350 "rw_ios_per_sec": 0, 00:09:37.350 "rw_mbytes_per_sec": 0, 00:09:37.350 "r_mbytes_per_sec": 0, 00:09:37.350 "w_mbytes_per_sec": 0 00:09:37.350 }, 00:09:37.350 "claimed": false, 00:09:37.350 "zoned": false, 00:09:37.350 "supported_io_types": { 00:09:37.350 "read": true, 00:09:37.350 "write": true, 00:09:37.350 "unmap": true, 00:09:37.350 "flush": true, 00:09:37.350 "reset": true, 00:09:37.350 "nvme_admin": false, 00:09:37.350 "nvme_io": false, 00:09:37.350 "nvme_io_md": false, 00:09:37.350 
"write_zeroes": true, 00:09:37.350 "zcopy": true, 00:09:37.350 "get_zone_info": false, 00:09:37.350 "zone_management": false, 00:09:37.350 "zone_append": false, 00:09:37.350 "compare": false, 00:09:37.350 "compare_and_write": false, 00:09:37.350 "abort": true, 00:09:37.350 "seek_hole": false, 00:09:37.350 "seek_data": false, 00:09:37.350 "copy": true, 00:09:37.350 "nvme_iov_md": false 00:09:37.350 }, 00:09:37.350 "memory_domains": [ 00:09:37.350 { 00:09:37.350 "dma_device_id": "system", 00:09:37.350 "dma_device_type": 1 00:09:37.350 }, 00:09:37.350 { 00:09:37.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.350 "dma_device_type": 2 00:09:37.350 } 00:09:37.350 ], 00:09:37.350 "driver_specific": {} 00:09:37.350 } 00:09:37.350 ] 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.350 BaseBdev3 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.350 [ 00:09:37.350 { 00:09:37.350 "name": "BaseBdev3", 00:09:37.350 "aliases": [ 00:09:37.350 "2f928397-9586-4319-8fdc-17ae225d1aaa" 00:09:37.350 ], 00:09:37.350 "product_name": "Malloc disk", 00:09:37.350 "block_size": 512, 00:09:37.350 "num_blocks": 65536, 00:09:37.350 "uuid": "2f928397-9586-4319-8fdc-17ae225d1aaa", 00:09:37.350 "assigned_rate_limits": { 00:09:37.350 "rw_ios_per_sec": 0, 00:09:37.350 "rw_mbytes_per_sec": 0, 00:09:37.350 "r_mbytes_per_sec": 0, 00:09:37.350 "w_mbytes_per_sec": 0 00:09:37.350 }, 00:09:37.350 "claimed": false, 00:09:37.350 "zoned": false, 00:09:37.350 "supported_io_types": { 00:09:37.350 "read": true, 00:09:37.350 "write": true, 00:09:37.350 "unmap": true, 00:09:37.350 "flush": true, 00:09:37.350 "reset": true, 00:09:37.350 "nvme_admin": false, 00:09:37.350 "nvme_io": false, 
00:09:37.350 "nvme_io_md": false, 00:09:37.350 "write_zeroes": true, 00:09:37.350 "zcopy": true, 00:09:37.350 "get_zone_info": false, 00:09:37.350 "zone_management": false, 00:09:37.350 "zone_append": false, 00:09:37.350 "compare": false, 00:09:37.350 "compare_and_write": false, 00:09:37.350 "abort": true, 00:09:37.350 "seek_hole": false, 00:09:37.350 "seek_data": false, 00:09:37.350 "copy": true, 00:09:37.350 "nvme_iov_md": false 00:09:37.350 }, 00:09:37.350 "memory_domains": [ 00:09:37.350 { 00:09:37.350 "dma_device_id": "system", 00:09:37.350 "dma_device_type": 1 00:09:37.350 }, 00:09:37.350 { 00:09:37.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.350 "dma_device_type": 2 00:09:37.350 } 00:09:37.350 ], 00:09:37.350 "driver_specific": {} 00:09:37.350 } 00:09:37.350 ] 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.350 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.351 [2024-12-06 09:47:02.516052] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.351 [2024-12-06 09:47:02.516105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.351 [2024-12-06 09:47:02.516123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:09:37.351 [2024-12-06 09:47:02.517889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.351 "name": "Existed_Raid", 00:09:37.351 "uuid": "fb064b8e-5c57-4305-8572-a68f2d21a089", 00:09:37.351 "strip_size_kb": 0, 00:09:37.351 "state": "configuring", 00:09:37.351 "raid_level": "raid1", 00:09:37.351 "superblock": true, 00:09:37.351 "num_base_bdevs": 3, 00:09:37.351 "num_base_bdevs_discovered": 2, 00:09:37.351 "num_base_bdevs_operational": 3, 00:09:37.351 "base_bdevs_list": [ 00:09:37.351 { 00:09:37.351 "name": "BaseBdev1", 00:09:37.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.351 "is_configured": false, 00:09:37.351 "data_offset": 0, 00:09:37.351 "data_size": 0 00:09:37.351 }, 00:09:37.351 { 00:09:37.351 "name": "BaseBdev2", 00:09:37.351 "uuid": "4756cbe4-3394-4f33-96a3-2cb1af4caf9d", 00:09:37.351 "is_configured": true, 00:09:37.351 "data_offset": 2048, 00:09:37.351 "data_size": 63488 00:09:37.351 }, 00:09:37.351 { 00:09:37.351 "name": "BaseBdev3", 00:09:37.351 "uuid": "2f928397-9586-4319-8fdc-17ae225d1aaa", 00:09:37.351 "is_configured": true, 00:09:37.351 "data_offset": 2048, 00:09:37.351 "data_size": 63488 00:09:37.351 } 00:09:37.351 ] 00:09:37.351 }' 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.351 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.919 [2024-12-06 09:47:02.911416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.919 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.919 "name": "Existed_Raid", 00:09:37.919 "uuid": 
"fb064b8e-5c57-4305-8572-a68f2d21a089", 00:09:37.919 "strip_size_kb": 0, 00:09:37.919 "state": "configuring", 00:09:37.919 "raid_level": "raid1", 00:09:37.919 "superblock": true, 00:09:37.919 "num_base_bdevs": 3, 00:09:37.919 "num_base_bdevs_discovered": 1, 00:09:37.919 "num_base_bdevs_operational": 3, 00:09:37.919 "base_bdevs_list": [ 00:09:37.919 { 00:09:37.919 "name": "BaseBdev1", 00:09:37.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.919 "is_configured": false, 00:09:37.919 "data_offset": 0, 00:09:37.919 "data_size": 0 00:09:37.919 }, 00:09:37.919 { 00:09:37.919 "name": null, 00:09:37.919 "uuid": "4756cbe4-3394-4f33-96a3-2cb1af4caf9d", 00:09:37.919 "is_configured": false, 00:09:37.919 "data_offset": 0, 00:09:37.919 "data_size": 63488 00:09:37.919 }, 00:09:37.919 { 00:09:37.919 "name": "BaseBdev3", 00:09:37.919 "uuid": "2f928397-9586-4319-8fdc-17ae225d1aaa", 00:09:37.919 "is_configured": true, 00:09:37.919 "data_offset": 2048, 00:09:37.919 "data_size": 63488 00:09:37.919 } 00:09:37.919 ] 00:09:37.919 }' 00:09:37.920 09:47:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.920 09:47:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:38.179 09:47:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.179 [2024-12-06 09:47:03.431471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.179 BaseBdev1 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
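A detail worth noting in the bdev dumps: the Malloc base bdevs advertise `"unmap": true`, `"flush": true`, and `"copy": true`, while the raid1 volume earlier reported those as `false`. A sketch of checking that contrast from the `supported_io_types` maps the RPCs return — the dicts are trimmed copies of the log output and the helper name is ours:

```python
# Trimmed supported_io_types from the log: a Malloc base bdev vs. the raid1 volume.
malloc_io = {"read": True, "write": True, "unmap": True, "flush": True, "copy": True}
raid1_io = {"read": True, "write": True, "unmap": False, "flush": False, "copy": False}

def unsupported_on_raid(base, raid):
    # I/O types a base bdev supports that the raid volume does not pass through.
    return sorted(t for t, ok in base.items() if ok and not raid.get(t, False))
```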
00:09:38.179 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.438 [ 00:09:38.438 { 00:09:38.438 "name": "BaseBdev1", 00:09:38.438 "aliases": [ 00:09:38.438 "3cc9bded-b597-42c6-acd3-e83ecfac2da7" 00:09:38.438 ], 00:09:38.438 "product_name": "Malloc disk", 00:09:38.438 "block_size": 512, 00:09:38.438 "num_blocks": 65536, 00:09:38.438 "uuid": "3cc9bded-b597-42c6-acd3-e83ecfac2da7", 00:09:38.438 "assigned_rate_limits": { 00:09:38.438 "rw_ios_per_sec": 0, 00:09:38.438 "rw_mbytes_per_sec": 0, 00:09:38.438 "r_mbytes_per_sec": 0, 00:09:38.438 "w_mbytes_per_sec": 0 00:09:38.438 }, 00:09:38.438 "claimed": true, 00:09:38.438 "claim_type": "exclusive_write", 00:09:38.438 "zoned": false, 00:09:38.438 "supported_io_types": { 00:09:38.438 "read": true, 00:09:38.438 "write": true, 00:09:38.438 "unmap": true, 00:09:38.438 "flush": true, 00:09:38.438 "reset": true, 00:09:38.438 "nvme_admin": false, 00:09:38.438 "nvme_io": false, 00:09:38.438 "nvme_io_md": false, 00:09:38.438 "write_zeroes": true, 00:09:38.438 "zcopy": true, 00:09:38.438 "get_zone_info": false, 00:09:38.438 "zone_management": false, 00:09:38.438 "zone_append": false, 00:09:38.438 "compare": false, 00:09:38.438 "compare_and_write": false, 00:09:38.438 "abort": true, 00:09:38.438 "seek_hole": false, 00:09:38.438 "seek_data": false, 00:09:38.438 "copy": true, 00:09:38.438 "nvme_iov_md": false 00:09:38.438 }, 00:09:38.438 "memory_domains": [ 00:09:38.438 { 00:09:38.438 "dma_device_id": "system", 00:09:38.438 "dma_device_type": 1 00:09:38.438 }, 00:09:38.438 { 00:09:38.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.438 "dma_device_type": 2 00:09:38.438 } 00:09:38.438 ], 00:09:38.438 "driver_specific": {} 00:09:38.438 } 00:09:38.438 ] 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:38.438 
09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.438 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.438 "name": "Existed_Raid", 00:09:38.438 "uuid": "fb064b8e-5c57-4305-8572-a68f2d21a089", 00:09:38.438 "strip_size_kb": 0, 
00:09:38.438 "state": "configuring", 00:09:38.438 "raid_level": "raid1", 00:09:38.438 "superblock": true, 00:09:38.438 "num_base_bdevs": 3, 00:09:38.438 "num_base_bdevs_discovered": 2, 00:09:38.438 "num_base_bdevs_operational": 3, 00:09:38.438 "base_bdevs_list": [ 00:09:38.438 { 00:09:38.438 "name": "BaseBdev1", 00:09:38.438 "uuid": "3cc9bded-b597-42c6-acd3-e83ecfac2da7", 00:09:38.438 "is_configured": true, 00:09:38.438 "data_offset": 2048, 00:09:38.438 "data_size": 63488 00:09:38.438 }, 00:09:38.438 { 00:09:38.438 "name": null, 00:09:38.438 "uuid": "4756cbe4-3394-4f33-96a3-2cb1af4caf9d", 00:09:38.439 "is_configured": false, 00:09:38.439 "data_offset": 0, 00:09:38.439 "data_size": 63488 00:09:38.439 }, 00:09:38.439 { 00:09:38.439 "name": "BaseBdev3", 00:09:38.439 "uuid": "2f928397-9586-4319-8fdc-17ae225d1aaa", 00:09:38.439 "is_configured": true, 00:09:38.439 "data_offset": 2048, 00:09:38.439 "data_size": 63488 00:09:38.439 } 00:09:38.439 ] 00:09:38.439 }' 00:09:38.439 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.439 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.698 [2024-12-06 09:47:03.954639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.698 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.698 09:47:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.957 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.957 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.957 "name": "Existed_Raid", 00:09:38.957 "uuid": "fb064b8e-5c57-4305-8572-a68f2d21a089", 00:09:38.957 "strip_size_kb": 0, 00:09:38.957 "state": "configuring", 00:09:38.957 "raid_level": "raid1", 00:09:38.957 "superblock": true, 00:09:38.957 "num_base_bdevs": 3, 00:09:38.957 "num_base_bdevs_discovered": 1, 00:09:38.957 "num_base_bdevs_operational": 3, 00:09:38.957 "base_bdevs_list": [ 00:09:38.957 { 00:09:38.957 "name": "BaseBdev1", 00:09:38.957 "uuid": "3cc9bded-b597-42c6-acd3-e83ecfac2da7", 00:09:38.957 "is_configured": true, 00:09:38.957 "data_offset": 2048, 00:09:38.957 "data_size": 63488 00:09:38.957 }, 00:09:38.957 { 00:09:38.957 "name": null, 00:09:38.957 "uuid": "4756cbe4-3394-4f33-96a3-2cb1af4caf9d", 00:09:38.957 "is_configured": false, 00:09:38.957 "data_offset": 0, 00:09:38.957 "data_size": 63488 00:09:38.957 }, 00:09:38.957 { 00:09:38.957 "name": null, 00:09:38.957 "uuid": "2f928397-9586-4319-8fdc-17ae225d1aaa", 00:09:38.957 "is_configured": false, 00:09:38.957 "data_offset": 0, 00:09:38.957 "data_size": 63488 00:09:38.957 } 00:09:38.957 ] 00:09:38.957 }' 00:09:38.957 09:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.957 09:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.217 09:47:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.217 [2024-12-06 09:47:04.433857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.217 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.477 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.477 "name": "Existed_Raid", 00:09:39.477 "uuid": "fb064b8e-5c57-4305-8572-a68f2d21a089", 00:09:39.477 "strip_size_kb": 0, 00:09:39.477 "state": "configuring", 00:09:39.477 "raid_level": "raid1", 00:09:39.477 "superblock": true, 00:09:39.477 "num_base_bdevs": 3, 00:09:39.477 "num_base_bdevs_discovered": 2, 00:09:39.477 "num_base_bdevs_operational": 3, 00:09:39.477 "base_bdevs_list": [ 00:09:39.477 { 00:09:39.477 "name": "BaseBdev1", 00:09:39.477 "uuid": "3cc9bded-b597-42c6-acd3-e83ecfac2da7", 00:09:39.477 "is_configured": true, 00:09:39.477 "data_offset": 2048, 00:09:39.477 "data_size": 63488 00:09:39.477 }, 00:09:39.477 { 00:09:39.477 "name": null, 00:09:39.477 "uuid": "4756cbe4-3394-4f33-96a3-2cb1af4caf9d", 00:09:39.477 "is_configured": false, 00:09:39.477 "data_offset": 0, 00:09:39.477 "data_size": 63488 00:09:39.477 }, 00:09:39.477 { 00:09:39.477 "name": "BaseBdev3", 00:09:39.477 "uuid": "2f928397-9586-4319-8fdc-17ae225d1aaa", 00:09:39.477 "is_configured": true, 00:09:39.477 "data_offset": 2048, 00:09:39.477 "data_size": 63488 00:09:39.477 } 00:09:39.477 ] 00:09:39.477 }' 00:09:39.477 09:47:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.477 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.736 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.736 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.736 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.736 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.736 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.736 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:39.736 09:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.736 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.736 09:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.736 [2024-12-06 09:47:04.937014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.996 "name": "Existed_Raid", 00:09:39.996 "uuid": "fb064b8e-5c57-4305-8572-a68f2d21a089", 00:09:39.996 "strip_size_kb": 0, 00:09:39.996 "state": "configuring", 00:09:39.996 "raid_level": "raid1", 00:09:39.996 "superblock": true, 00:09:39.996 "num_base_bdevs": 3, 00:09:39.996 "num_base_bdevs_discovered": 1, 00:09:39.996 "num_base_bdevs_operational": 3, 00:09:39.996 "base_bdevs_list": [ 00:09:39.996 { 00:09:39.996 "name": null, 00:09:39.996 "uuid": "3cc9bded-b597-42c6-acd3-e83ecfac2da7", 00:09:39.996 "is_configured": false, 00:09:39.996 "data_offset": 0, 00:09:39.996 "data_size": 63488 00:09:39.996 }, 00:09:39.996 { 00:09:39.996 "name": null, 00:09:39.996 "uuid": 
"4756cbe4-3394-4f33-96a3-2cb1af4caf9d", 00:09:39.996 "is_configured": false, 00:09:39.996 "data_offset": 0, 00:09:39.996 "data_size": 63488 00:09:39.996 }, 00:09:39.996 { 00:09:39.996 "name": "BaseBdev3", 00:09:39.996 "uuid": "2f928397-9586-4319-8fdc-17ae225d1aaa", 00:09:39.996 "is_configured": true, 00:09:39.996 "data_offset": 2048, 00:09:39.996 "data_size": 63488 00:09:39.996 } 00:09:39.996 ] 00:09:39.996 }' 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.996 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.254 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.254 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.254 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.254 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:40.254 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.513 [2024-12-06 09:47:05.548726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.513 "name": "Existed_Raid", 00:09:40.513 "uuid": "fb064b8e-5c57-4305-8572-a68f2d21a089", 00:09:40.513 "strip_size_kb": 0, 00:09:40.513 "state": "configuring", 00:09:40.513 
"raid_level": "raid1", 00:09:40.513 "superblock": true, 00:09:40.513 "num_base_bdevs": 3, 00:09:40.513 "num_base_bdevs_discovered": 2, 00:09:40.513 "num_base_bdevs_operational": 3, 00:09:40.513 "base_bdevs_list": [ 00:09:40.513 { 00:09:40.513 "name": null, 00:09:40.513 "uuid": "3cc9bded-b597-42c6-acd3-e83ecfac2da7", 00:09:40.513 "is_configured": false, 00:09:40.513 "data_offset": 0, 00:09:40.513 "data_size": 63488 00:09:40.513 }, 00:09:40.513 { 00:09:40.513 "name": "BaseBdev2", 00:09:40.513 "uuid": "4756cbe4-3394-4f33-96a3-2cb1af4caf9d", 00:09:40.513 "is_configured": true, 00:09:40.513 "data_offset": 2048, 00:09:40.513 "data_size": 63488 00:09:40.513 }, 00:09:40.513 { 00:09:40.513 "name": "BaseBdev3", 00:09:40.513 "uuid": "2f928397-9586-4319-8fdc-17ae225d1aaa", 00:09:40.513 "is_configured": true, 00:09:40.513 "data_offset": 2048, 00:09:40.513 "data_size": 63488 00:09:40.513 } 00:09:40.513 ] 00:09:40.513 }' 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.513 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.776 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.776 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.776 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.776 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.776 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.776 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:40.776 09:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:40.776 09:47:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.776 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.776 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.776 09:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.776 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3cc9bded-b597-42c6-acd3-e83ecfac2da7 00:09:40.776 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.776 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.776 [2024-12-06 09:47:06.043688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:40.776 [2024-12-06 09:47:06.043979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:40.776 [2024-12-06 09:47:06.043993] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:40.776 [2024-12-06 09:47:06.044304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:40.776 [2024-12-06 09:47:06.044485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:40.776 [2024-12-06 09:47:06.044506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:40.776 NewBaseBdev 00:09:40.776 [2024-12-06 09:47:06.044662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.776 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.776 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:40.776 
09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:40.776 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.776 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.047 [ 00:09:41.047 { 00:09:41.047 "name": "NewBaseBdev", 00:09:41.047 "aliases": [ 00:09:41.047 "3cc9bded-b597-42c6-acd3-e83ecfac2da7" 00:09:41.047 ], 00:09:41.047 "product_name": "Malloc disk", 00:09:41.047 "block_size": 512, 00:09:41.047 "num_blocks": 65536, 00:09:41.047 "uuid": "3cc9bded-b597-42c6-acd3-e83ecfac2da7", 00:09:41.047 "assigned_rate_limits": { 00:09:41.047 "rw_ios_per_sec": 0, 00:09:41.047 "rw_mbytes_per_sec": 0, 00:09:41.047 "r_mbytes_per_sec": 0, 00:09:41.047 "w_mbytes_per_sec": 0 00:09:41.047 }, 00:09:41.047 "claimed": true, 00:09:41.047 "claim_type": "exclusive_write", 00:09:41.047 
"zoned": false, 00:09:41.047 "supported_io_types": { 00:09:41.047 "read": true, 00:09:41.047 "write": true, 00:09:41.047 "unmap": true, 00:09:41.047 "flush": true, 00:09:41.047 "reset": true, 00:09:41.047 "nvme_admin": false, 00:09:41.047 "nvme_io": false, 00:09:41.047 "nvme_io_md": false, 00:09:41.047 "write_zeroes": true, 00:09:41.047 "zcopy": true, 00:09:41.047 "get_zone_info": false, 00:09:41.047 "zone_management": false, 00:09:41.047 "zone_append": false, 00:09:41.047 "compare": false, 00:09:41.047 "compare_and_write": false, 00:09:41.047 "abort": true, 00:09:41.047 "seek_hole": false, 00:09:41.047 "seek_data": false, 00:09:41.047 "copy": true, 00:09:41.047 "nvme_iov_md": false 00:09:41.047 }, 00:09:41.047 "memory_domains": [ 00:09:41.047 { 00:09:41.047 "dma_device_id": "system", 00:09:41.047 "dma_device_type": 1 00:09:41.047 }, 00:09:41.047 { 00:09:41.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.047 "dma_device_type": 2 00:09:41.047 } 00:09:41.047 ], 00:09:41.047 "driver_specific": {} 00:09:41.047 } 00:09:41.047 ] 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.047 "name": "Existed_Raid", 00:09:41.047 "uuid": "fb064b8e-5c57-4305-8572-a68f2d21a089", 00:09:41.047 "strip_size_kb": 0, 00:09:41.047 "state": "online", 00:09:41.047 "raid_level": "raid1", 00:09:41.047 "superblock": true, 00:09:41.047 "num_base_bdevs": 3, 00:09:41.047 "num_base_bdevs_discovered": 3, 00:09:41.047 "num_base_bdevs_operational": 3, 00:09:41.047 "base_bdevs_list": [ 00:09:41.047 { 00:09:41.047 "name": "NewBaseBdev", 00:09:41.047 "uuid": "3cc9bded-b597-42c6-acd3-e83ecfac2da7", 00:09:41.047 "is_configured": true, 00:09:41.047 "data_offset": 2048, 00:09:41.047 "data_size": 63488 00:09:41.047 }, 00:09:41.047 { 00:09:41.047 "name": "BaseBdev2", 00:09:41.047 "uuid": "4756cbe4-3394-4f33-96a3-2cb1af4caf9d", 00:09:41.047 "is_configured": true, 00:09:41.047 "data_offset": 2048, 00:09:41.047 "data_size": 63488 00:09:41.047 }, 00:09:41.047 
{ 00:09:41.047 "name": "BaseBdev3", 00:09:41.047 "uuid": "2f928397-9586-4319-8fdc-17ae225d1aaa", 00:09:41.047 "is_configured": true, 00:09:41.047 "data_offset": 2048, 00:09:41.047 "data_size": 63488 00:09:41.047 } 00:09:41.047 ] 00:09:41.047 }' 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.047 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.307 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:41.307 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:41.307 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.307 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.307 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.307 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.307 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:41.307 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.307 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.307 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.307 [2024-12-06 09:47:06.523230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.307 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.307 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.307 "name": "Existed_Raid", 00:09:41.307 
"aliases": [ 00:09:41.307 "fb064b8e-5c57-4305-8572-a68f2d21a089" 00:09:41.307 ], 00:09:41.307 "product_name": "Raid Volume", 00:09:41.307 "block_size": 512, 00:09:41.307 "num_blocks": 63488, 00:09:41.307 "uuid": "fb064b8e-5c57-4305-8572-a68f2d21a089", 00:09:41.307 "assigned_rate_limits": { 00:09:41.307 "rw_ios_per_sec": 0, 00:09:41.307 "rw_mbytes_per_sec": 0, 00:09:41.307 "r_mbytes_per_sec": 0, 00:09:41.307 "w_mbytes_per_sec": 0 00:09:41.307 }, 00:09:41.307 "claimed": false, 00:09:41.307 "zoned": false, 00:09:41.307 "supported_io_types": { 00:09:41.307 "read": true, 00:09:41.307 "write": true, 00:09:41.307 "unmap": false, 00:09:41.307 "flush": false, 00:09:41.307 "reset": true, 00:09:41.307 "nvme_admin": false, 00:09:41.307 "nvme_io": false, 00:09:41.307 "nvme_io_md": false, 00:09:41.307 "write_zeroes": true, 00:09:41.307 "zcopy": false, 00:09:41.307 "get_zone_info": false, 00:09:41.307 "zone_management": false, 00:09:41.307 "zone_append": false, 00:09:41.307 "compare": false, 00:09:41.307 "compare_and_write": false, 00:09:41.307 "abort": false, 00:09:41.307 "seek_hole": false, 00:09:41.307 "seek_data": false, 00:09:41.307 "copy": false, 00:09:41.307 "nvme_iov_md": false 00:09:41.307 }, 00:09:41.307 "memory_domains": [ 00:09:41.307 { 00:09:41.307 "dma_device_id": "system", 00:09:41.307 "dma_device_type": 1 00:09:41.307 }, 00:09:41.307 { 00:09:41.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.307 "dma_device_type": 2 00:09:41.307 }, 00:09:41.307 { 00:09:41.307 "dma_device_id": "system", 00:09:41.307 "dma_device_type": 1 00:09:41.307 }, 00:09:41.307 { 00:09:41.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.307 "dma_device_type": 2 00:09:41.307 }, 00:09:41.307 { 00:09:41.307 "dma_device_id": "system", 00:09:41.307 "dma_device_type": 1 00:09:41.307 }, 00:09:41.307 { 00:09:41.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.307 "dma_device_type": 2 00:09:41.307 } 00:09:41.307 ], 00:09:41.307 "driver_specific": { 00:09:41.307 "raid": { 00:09:41.307 
"uuid": "fb064b8e-5c57-4305-8572-a68f2d21a089", 00:09:41.307 "strip_size_kb": 0, 00:09:41.307 "state": "online", 00:09:41.307 "raid_level": "raid1", 00:09:41.307 "superblock": true, 00:09:41.307 "num_base_bdevs": 3, 00:09:41.307 "num_base_bdevs_discovered": 3, 00:09:41.307 "num_base_bdevs_operational": 3, 00:09:41.307 "base_bdevs_list": [ 00:09:41.307 { 00:09:41.307 "name": "NewBaseBdev", 00:09:41.307 "uuid": "3cc9bded-b597-42c6-acd3-e83ecfac2da7", 00:09:41.307 "is_configured": true, 00:09:41.307 "data_offset": 2048, 00:09:41.307 "data_size": 63488 00:09:41.307 }, 00:09:41.307 { 00:09:41.307 "name": "BaseBdev2", 00:09:41.307 "uuid": "4756cbe4-3394-4f33-96a3-2cb1af4caf9d", 00:09:41.307 "is_configured": true, 00:09:41.307 "data_offset": 2048, 00:09:41.307 "data_size": 63488 00:09:41.307 }, 00:09:41.307 { 00:09:41.307 "name": "BaseBdev3", 00:09:41.307 "uuid": "2f928397-9586-4319-8fdc-17ae225d1aaa", 00:09:41.307 "is_configured": true, 00:09:41.307 "data_offset": 2048, 00:09:41.307 "data_size": 63488 00:09:41.307 } 00:09:41.307 ] 00:09:41.307 } 00:09:41.307 } 00:09:41.307 }' 00:09:41.308 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:41.568 BaseBdev2 00:09:41.568 BaseBdev3' 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:41.568 09:47:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.568 [2024-12-06 09:47:06.822389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.568 [2024-12-06 09:47:06.822481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.568 [2024-12-06 09:47:06.822592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.568 [2024-12-06 09:47:06.822933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.568 [2024-12-06 09:47:06.822996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67968 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67968 ']' 
00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67968 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:41.568 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.828 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67968 00:09:41.828 killing process with pid 67968 00:09:41.828 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.828 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.828 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67968' 00:09:41.828 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67968 00:09:41.828 [2024-12-06 09:47:06.870850] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.828 09:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67968 00:09:42.088 [2024-12-06 09:47:07.170338] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.028 09:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:43.028 ************************************ 00:09:43.028 END TEST raid_state_function_test_sb 00:09:43.028 ************************************ 00:09:43.028 00:09:43.029 real 0m10.377s 00:09:43.029 user 0m16.503s 00:09:43.029 sys 0m1.796s 00:09:43.029 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.029 09:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.289 09:47:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:09:43.289 09:47:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:43.289 09:47:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.289 09:47:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.289 ************************************ 00:09:43.289 START TEST raid_superblock_test 00:09:43.289 ************************************ 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:43.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68588 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68588 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68588 ']' 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.289 09:47:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.289 [2024-12-06 09:47:08.448817] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:09:43.289 [2024-12-06 09:47:08.448940] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68588 ] 00:09:43.550 [2024-12-06 09:47:08.608275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.550 [2024-12-06 09:47:08.722897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.809 [2024-12-06 09:47:08.921184] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.809 [2024-12-06 09:47:08.921325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:44.066 
09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.066 malloc1 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.066 [2024-12-06 09:47:09.332721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:44.066 [2024-12-06 09:47:09.332845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.066 [2024-12-06 09:47:09.332892] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:44.066 [2024-12-06 09:47:09.332927] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.066 [2024-12-06 09:47:09.335376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.066 [2024-12-06 09:47:09.335459] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:44.066 pt1 00:09:44.066 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.067 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.324 malloc2 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.324 [2024-12-06 09:47:09.387869] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.324 [2024-12-06 09:47:09.387989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.324 [2024-12-06 09:47:09.388039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:44.324 [2024-12-06 09:47:09.388079] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.324 [2024-12-06 09:47:09.390363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.324 [2024-12-06 09:47:09.390432] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.324 
pt2 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.324 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.325 malloc3 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.325 [2024-12-06 09:47:09.457041] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:44.325 [2024-12-06 09:47:09.457163] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.325 [2024-12-06 09:47:09.457210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:44.325 [2024-12-06 09:47:09.457266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.325 [2024-12-06 09:47:09.459552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.325 [2024-12-06 09:47:09.459622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:44.325 pt3 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.325 [2024-12-06 09:47:09.469068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:44.325 [2024-12-06 09:47:09.470906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.325 [2024-12-06 09:47:09.471024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:44.325 [2024-12-06 09:47:09.471205] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:44.325 [2024-12-06 09:47:09.471259] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:44.325 [2024-12-06 09:47:09.471503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:44.325 
[2024-12-06 09:47:09.471717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:44.325 [2024-12-06 09:47:09.471785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:44.325 [2024-12-06 09:47:09.471982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.325 "name": "raid_bdev1", 00:09:44.325 "uuid": "d33990f3-1636-47d9-9082-ad6c1cac6a7d", 00:09:44.325 "strip_size_kb": 0, 00:09:44.325 "state": "online", 00:09:44.325 "raid_level": "raid1", 00:09:44.325 "superblock": true, 00:09:44.325 "num_base_bdevs": 3, 00:09:44.325 "num_base_bdevs_discovered": 3, 00:09:44.325 "num_base_bdevs_operational": 3, 00:09:44.325 "base_bdevs_list": [ 00:09:44.325 { 00:09:44.325 "name": "pt1", 00:09:44.325 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.325 "is_configured": true, 00:09:44.325 "data_offset": 2048, 00:09:44.325 "data_size": 63488 00:09:44.325 }, 00:09:44.325 { 00:09:44.325 "name": "pt2", 00:09:44.325 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.325 "is_configured": true, 00:09:44.325 "data_offset": 2048, 00:09:44.325 "data_size": 63488 00:09:44.325 }, 00:09:44.325 { 00:09:44.325 "name": "pt3", 00:09:44.325 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.325 "is_configured": true, 00:09:44.325 "data_offset": 2048, 00:09:44.325 "data_size": 63488 00:09:44.325 } 00:09:44.325 ] 00:09:44.325 }' 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.325 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.891 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:44.891 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:44.891 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.891 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.891 09:47:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.891 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.891 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.891 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.891 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.891 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.891 [2024-12-06 09:47:09.952566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.891 09:47:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.891 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.891 "name": "raid_bdev1", 00:09:44.891 "aliases": [ 00:09:44.891 "d33990f3-1636-47d9-9082-ad6c1cac6a7d" 00:09:44.891 ], 00:09:44.891 "product_name": "Raid Volume", 00:09:44.891 "block_size": 512, 00:09:44.891 "num_blocks": 63488, 00:09:44.891 "uuid": "d33990f3-1636-47d9-9082-ad6c1cac6a7d", 00:09:44.891 "assigned_rate_limits": { 00:09:44.891 "rw_ios_per_sec": 0, 00:09:44.891 "rw_mbytes_per_sec": 0, 00:09:44.891 "r_mbytes_per_sec": 0, 00:09:44.891 "w_mbytes_per_sec": 0 00:09:44.891 }, 00:09:44.891 "claimed": false, 00:09:44.891 "zoned": false, 00:09:44.891 "supported_io_types": { 00:09:44.891 "read": true, 00:09:44.891 "write": true, 00:09:44.891 "unmap": false, 00:09:44.891 "flush": false, 00:09:44.891 "reset": true, 00:09:44.891 "nvme_admin": false, 00:09:44.891 "nvme_io": false, 00:09:44.891 "nvme_io_md": false, 00:09:44.891 "write_zeroes": true, 00:09:44.891 "zcopy": false, 00:09:44.891 "get_zone_info": false, 00:09:44.891 "zone_management": false, 00:09:44.891 "zone_append": false, 00:09:44.891 "compare": false, 00:09:44.891 
"compare_and_write": false, 00:09:44.891 "abort": false, 00:09:44.891 "seek_hole": false, 00:09:44.891 "seek_data": false, 00:09:44.891 "copy": false, 00:09:44.891 "nvme_iov_md": false 00:09:44.891 }, 00:09:44.891 "memory_domains": [ 00:09:44.891 { 00:09:44.891 "dma_device_id": "system", 00:09:44.891 "dma_device_type": 1 00:09:44.891 }, 00:09:44.891 { 00:09:44.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.891 "dma_device_type": 2 00:09:44.891 }, 00:09:44.891 { 00:09:44.891 "dma_device_id": "system", 00:09:44.891 "dma_device_type": 1 00:09:44.891 }, 00:09:44.891 { 00:09:44.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.891 "dma_device_type": 2 00:09:44.891 }, 00:09:44.891 { 00:09:44.891 "dma_device_id": "system", 00:09:44.891 "dma_device_type": 1 00:09:44.891 }, 00:09:44.891 { 00:09:44.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.891 "dma_device_type": 2 00:09:44.891 } 00:09:44.891 ], 00:09:44.891 "driver_specific": { 00:09:44.891 "raid": { 00:09:44.891 "uuid": "d33990f3-1636-47d9-9082-ad6c1cac6a7d", 00:09:44.891 "strip_size_kb": 0, 00:09:44.891 "state": "online", 00:09:44.891 "raid_level": "raid1", 00:09:44.891 "superblock": true, 00:09:44.891 "num_base_bdevs": 3, 00:09:44.891 "num_base_bdevs_discovered": 3, 00:09:44.891 "num_base_bdevs_operational": 3, 00:09:44.891 "base_bdevs_list": [ 00:09:44.891 { 00:09:44.891 "name": "pt1", 00:09:44.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.891 "is_configured": true, 00:09:44.891 "data_offset": 2048, 00:09:44.891 "data_size": 63488 00:09:44.891 }, 00:09:44.891 { 00:09:44.891 "name": "pt2", 00:09:44.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.891 "is_configured": true, 00:09:44.891 "data_offset": 2048, 00:09:44.891 "data_size": 63488 00:09:44.891 }, 00:09:44.891 { 00:09:44.891 "name": "pt3", 00:09:44.891 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.891 "is_configured": true, 00:09:44.891 "data_offset": 2048, 00:09:44.891 "data_size": 63488 00:09:44.891 } 
00:09:44.891 ] 00:09:44.891 } 00:09:44.891 } 00:09:44.891 }' 00:09:44.891 09:47:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.891 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:44.891 pt2 00:09:44.891 pt3' 00:09:44.891 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.892 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.151 [2024-12-06 09:47:10.188076] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d33990f3-1636-47d9-9082-ad6c1cac6a7d 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d33990f3-1636-47d9-9082-ad6c1cac6a7d ']' 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.151 [2024-12-06 09:47:10.247700] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.151 [2024-12-06 09:47:10.247777] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.151 [2024-12-06 09:47:10.247893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.151 [2024-12-06 09:47:10.247991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.151 [2024-12-06 09:47:10.248041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:45.151 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.152 [2024-12-06 09:47:10.399528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:45.152 [2024-12-06 09:47:10.401542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:45.152 [2024-12-06 09:47:10.401649] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:45.152 [2024-12-06 09:47:10.401725] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:45.152 [2024-12-06 09:47:10.401822] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:45.152 [2024-12-06 09:47:10.401898] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:45.152 [2024-12-06 09:47:10.401951] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.152 [2024-12-06 09:47:10.401999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:45.152 request: 00:09:45.152 { 00:09:45.152 "name": "raid_bdev1", 00:09:45.152 "raid_level": "raid1", 00:09:45.152 "base_bdevs": [ 00:09:45.152 "malloc1", 00:09:45.152 "malloc2", 00:09:45.152 "malloc3" 00:09:45.152 ], 00:09:45.152 "superblock": false, 00:09:45.152 "method": "bdev_raid_create", 00:09:45.152 "req_id": 1 00:09:45.152 } 00:09:45.152 Got JSON-RPC error response 00:09:45.152 response: 00:09:45.152 { 00:09:45.152 "code": -17, 00:09:45.152 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:45.152 } 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.152 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.410 [2024-12-06 09:47:10.467331] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:45.410 [2024-12-06 09:47:10.467423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.410 [2024-12-06 09:47:10.467462] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:45.410 [2024-12-06 09:47:10.467489] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.410 [2024-12-06 09:47:10.469698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.410 [2024-12-06 09:47:10.469785] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:45.410 [2024-12-06 09:47:10.469888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:45.410 [2024-12-06 09:47:10.469971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:45.410 pt1 00:09:45.410 
09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.410 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.411 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.411 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.411 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.411 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.411 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.411 "name": "raid_bdev1", 00:09:45.411 "uuid": "d33990f3-1636-47d9-9082-ad6c1cac6a7d", 00:09:45.411 "strip_size_kb": 0, 00:09:45.411 
"state": "configuring", 00:09:45.411 "raid_level": "raid1", 00:09:45.411 "superblock": true, 00:09:45.411 "num_base_bdevs": 3, 00:09:45.411 "num_base_bdevs_discovered": 1, 00:09:45.411 "num_base_bdevs_operational": 3, 00:09:45.411 "base_bdevs_list": [ 00:09:45.411 { 00:09:45.411 "name": "pt1", 00:09:45.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.411 "is_configured": true, 00:09:45.411 "data_offset": 2048, 00:09:45.411 "data_size": 63488 00:09:45.411 }, 00:09:45.411 { 00:09:45.411 "name": null, 00:09:45.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.411 "is_configured": false, 00:09:45.411 "data_offset": 2048, 00:09:45.411 "data_size": 63488 00:09:45.411 }, 00:09:45.411 { 00:09:45.411 "name": null, 00:09:45.411 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.411 "is_configured": false, 00:09:45.411 "data_offset": 2048, 00:09:45.411 "data_size": 63488 00:09:45.411 } 00:09:45.411 ] 00:09:45.411 }' 00:09:45.411 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.411 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.669 [2024-12-06 09:47:10.902619] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:45.669 [2024-12-06 09:47:10.902744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.669 [2024-12-06 09:47:10.902785] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:45.669 
[2024-12-06 09:47:10.902814] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.669 [2024-12-06 09:47:10.903294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.669 [2024-12-06 09:47:10.903351] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:45.669 [2024-12-06 09:47:10.903467] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:45.669 [2024-12-06 09:47:10.903516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:45.669 pt2 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.669 [2024-12-06 09:47:10.914573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.669 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.927 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.927 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.927 "name": "raid_bdev1", 00:09:45.927 "uuid": "d33990f3-1636-47d9-9082-ad6c1cac6a7d", 00:09:45.927 "strip_size_kb": 0, 00:09:45.927 "state": "configuring", 00:09:45.927 "raid_level": "raid1", 00:09:45.927 "superblock": true, 00:09:45.927 "num_base_bdevs": 3, 00:09:45.927 "num_base_bdevs_discovered": 1, 00:09:45.927 "num_base_bdevs_operational": 3, 00:09:45.927 "base_bdevs_list": [ 00:09:45.927 { 00:09:45.927 "name": "pt1", 00:09:45.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.927 "is_configured": true, 00:09:45.927 "data_offset": 2048, 00:09:45.927 "data_size": 63488 00:09:45.927 }, 00:09:45.927 { 00:09:45.927 "name": null, 00:09:45.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.927 "is_configured": false, 00:09:45.927 "data_offset": 0, 00:09:45.927 "data_size": 63488 00:09:45.927 }, 00:09:45.927 { 00:09:45.927 "name": null, 00:09:45.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.927 "is_configured": false, 00:09:45.927 
"data_offset": 2048, 00:09:45.927 "data_size": 63488 00:09:45.927 } 00:09:45.927 ] 00:09:45.927 }' 00:09:45.927 09:47:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.927 09:47:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.202 [2024-12-06 09:47:11.337841] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.202 [2024-12-06 09:47:11.337958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.202 [2024-12-06 09:47:11.337998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:46.202 [2024-12-06 09:47:11.338028] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.202 [2024-12-06 09:47:11.338520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.202 [2024-12-06 09:47:11.338587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.202 [2024-12-06 09:47:11.338698] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:46.202 [2024-12-06 09:47:11.338763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:46.202 pt2 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.202 09:47:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.202 [2024-12-06 09:47:11.349798] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:46.202 [2024-12-06 09:47:11.349900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.202 [2024-12-06 09:47:11.349931] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:46.202 [2024-12-06 09:47:11.349960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.202 [2024-12-06 09:47:11.350400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.202 [2024-12-06 09:47:11.350472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:46.202 [2024-12-06 09:47:11.350564] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:46.202 [2024-12-06 09:47:11.350623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:46.202 [2024-12-06 09:47:11.350792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:46.202 [2024-12-06 09:47:11.350839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:46.202 [2024-12-06 09:47:11.351127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:46.202 [2024-12-06 09:47:11.351365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:46.202 [2024-12-06 09:47:11.351428] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:46.202 [2024-12-06 09:47:11.351639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.202 pt3 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.202 09:47:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.202 "name": "raid_bdev1", 00:09:46.202 "uuid": "d33990f3-1636-47d9-9082-ad6c1cac6a7d", 00:09:46.202 "strip_size_kb": 0, 00:09:46.202 "state": "online", 00:09:46.202 "raid_level": "raid1", 00:09:46.202 "superblock": true, 00:09:46.202 "num_base_bdevs": 3, 00:09:46.202 "num_base_bdevs_discovered": 3, 00:09:46.202 "num_base_bdevs_operational": 3, 00:09:46.202 "base_bdevs_list": [ 00:09:46.202 { 00:09:46.202 "name": "pt1", 00:09:46.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.202 "is_configured": true, 00:09:46.202 "data_offset": 2048, 00:09:46.202 "data_size": 63488 00:09:46.202 }, 00:09:46.202 { 00:09:46.202 "name": "pt2", 00:09:46.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.202 "is_configured": true, 00:09:46.202 "data_offset": 2048, 00:09:46.202 "data_size": 63488 00:09:46.202 }, 00:09:46.202 { 00:09:46.202 "name": "pt3", 00:09:46.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.202 "is_configured": true, 00:09:46.202 "data_offset": 2048, 00:09:46.202 "data_size": 63488 00:09:46.202 } 00:09:46.202 ] 00:09:46.202 }' 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.202 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.462 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:46.462 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:46.462 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:46.462 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.462 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.462 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.462 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.462 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.462 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.462 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.462 [2024-12-06 09:47:11.725551] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.721 "name": "raid_bdev1", 00:09:46.721 "aliases": [ 00:09:46.721 "d33990f3-1636-47d9-9082-ad6c1cac6a7d" 00:09:46.721 ], 00:09:46.721 "product_name": "Raid Volume", 00:09:46.721 "block_size": 512, 00:09:46.721 "num_blocks": 63488, 00:09:46.721 "uuid": "d33990f3-1636-47d9-9082-ad6c1cac6a7d", 00:09:46.721 "assigned_rate_limits": { 00:09:46.721 "rw_ios_per_sec": 0, 00:09:46.721 "rw_mbytes_per_sec": 0, 00:09:46.721 "r_mbytes_per_sec": 0, 00:09:46.721 "w_mbytes_per_sec": 0 00:09:46.721 }, 00:09:46.721 "claimed": false, 00:09:46.721 "zoned": false, 00:09:46.721 "supported_io_types": { 00:09:46.721 "read": true, 00:09:46.721 "write": true, 00:09:46.721 "unmap": false, 00:09:46.721 "flush": false, 00:09:46.721 "reset": true, 00:09:46.721 "nvme_admin": false, 00:09:46.721 "nvme_io": false, 00:09:46.721 "nvme_io_md": false, 00:09:46.721 "write_zeroes": true, 00:09:46.721 "zcopy": false, 00:09:46.721 "get_zone_info": 
false, 00:09:46.721 "zone_management": false, 00:09:46.721 "zone_append": false, 00:09:46.721 "compare": false, 00:09:46.721 "compare_and_write": false, 00:09:46.721 "abort": false, 00:09:46.721 "seek_hole": false, 00:09:46.721 "seek_data": false, 00:09:46.721 "copy": false, 00:09:46.721 "nvme_iov_md": false 00:09:46.721 }, 00:09:46.721 "memory_domains": [ 00:09:46.721 { 00:09:46.721 "dma_device_id": "system", 00:09:46.721 "dma_device_type": 1 00:09:46.721 }, 00:09:46.721 { 00:09:46.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.721 "dma_device_type": 2 00:09:46.721 }, 00:09:46.721 { 00:09:46.721 "dma_device_id": "system", 00:09:46.721 "dma_device_type": 1 00:09:46.721 }, 00:09:46.721 { 00:09:46.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.721 "dma_device_type": 2 00:09:46.721 }, 00:09:46.721 { 00:09:46.721 "dma_device_id": "system", 00:09:46.721 "dma_device_type": 1 00:09:46.721 }, 00:09:46.721 { 00:09:46.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.721 "dma_device_type": 2 00:09:46.721 } 00:09:46.721 ], 00:09:46.721 "driver_specific": { 00:09:46.721 "raid": { 00:09:46.721 "uuid": "d33990f3-1636-47d9-9082-ad6c1cac6a7d", 00:09:46.721 "strip_size_kb": 0, 00:09:46.721 "state": "online", 00:09:46.721 "raid_level": "raid1", 00:09:46.721 "superblock": true, 00:09:46.721 "num_base_bdevs": 3, 00:09:46.721 "num_base_bdevs_discovered": 3, 00:09:46.721 "num_base_bdevs_operational": 3, 00:09:46.721 "base_bdevs_list": [ 00:09:46.721 { 00:09:46.721 "name": "pt1", 00:09:46.721 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:46.721 "is_configured": true, 00:09:46.721 "data_offset": 2048, 00:09:46.721 "data_size": 63488 00:09:46.721 }, 00:09:46.721 { 00:09:46.721 "name": "pt2", 00:09:46.721 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.721 "is_configured": true, 00:09:46.721 "data_offset": 2048, 00:09:46.721 "data_size": 63488 00:09:46.721 }, 00:09:46.721 { 00:09:46.721 "name": "pt3", 00:09:46.721 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:46.721 "is_configured": true, 00:09:46.721 "data_offset": 2048, 00:09:46.721 "data_size": 63488 00:09:46.721 } 00:09:46.721 ] 00:09:46.721 } 00:09:46.721 } 00:09:46.721 }' 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:46.721 pt2 00:09:46.721 pt3' 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.721 09:47:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.721 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.722 09:47:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:46.981 [2024-12-06 09:47:11.997041] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.981 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.981 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d33990f3-1636-47d9-9082-ad6c1cac6a7d '!=' d33990f3-1636-47d9-9082-ad6c1cac6a7d ']' 00:09:46.981 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:46.981 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.981 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:46.981 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:46.981 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.981 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.981 [2024-12-06 09:47:12.028707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:46.981 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.982 09:47:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.982 "name": "raid_bdev1", 00:09:46.982 "uuid": "d33990f3-1636-47d9-9082-ad6c1cac6a7d", 00:09:46.982 "strip_size_kb": 0, 00:09:46.982 "state": "online", 00:09:46.982 "raid_level": "raid1", 00:09:46.982 "superblock": true, 00:09:46.982 "num_base_bdevs": 3, 00:09:46.982 "num_base_bdevs_discovered": 2, 00:09:46.982 "num_base_bdevs_operational": 2, 00:09:46.982 "base_bdevs_list": [ 00:09:46.982 { 00:09:46.982 "name": null, 00:09:46.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.982 "is_configured": false, 00:09:46.982 "data_offset": 0, 00:09:46.982 "data_size": 63488 00:09:46.982 }, 00:09:46.982 { 00:09:46.982 "name": "pt2", 00:09:46.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:46.982 "is_configured": true, 00:09:46.982 "data_offset": 2048, 00:09:46.982 "data_size": 63488 00:09:46.982 }, 00:09:46.982 { 00:09:46.982 "name": "pt3", 00:09:46.982 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:46.982 "is_configured": true, 00:09:46.982 "data_offset": 2048, 00:09:46.982 "data_size": 63488 00:09:46.982 } 
00:09:46.982 ] 00:09:46.982 }' 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.982 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.242 [2024-12-06 09:47:12.432012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.242 [2024-12-06 09:47:12.432100] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.242 [2024-12-06 09:47:12.432229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.242 [2024-12-06 09:47:12.432313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.242 [2024-12-06 09:47:12.432365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:47.242 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:47.501 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:47.501 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:47.501 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.501 09:47:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.501 [2024-12-06 09:47:12.519854] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:47.501 [2024-12-06 09:47:12.519959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.501 [2024-12-06 09:47:12.519996] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:47.501 [2024-12-06 09:47:12.520026] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.501 [2024-12-06 09:47:12.522318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.502 [2024-12-06 09:47:12.522398] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:47.502 [2024-12-06 09:47:12.522509] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:47.502 [2024-12-06 09:47:12.522586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.502 pt2 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.502 09:47:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.502 "name": "raid_bdev1", 00:09:47.502 "uuid": "d33990f3-1636-47d9-9082-ad6c1cac6a7d", 00:09:47.502 "strip_size_kb": 0, 00:09:47.502 "state": "configuring", 00:09:47.502 "raid_level": "raid1", 00:09:47.502 "superblock": true, 00:09:47.502 "num_base_bdevs": 3, 00:09:47.502 "num_base_bdevs_discovered": 1, 00:09:47.502 "num_base_bdevs_operational": 2, 00:09:47.502 "base_bdevs_list": [ 00:09:47.502 { 00:09:47.502 "name": null, 00:09:47.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.502 "is_configured": false, 00:09:47.502 "data_offset": 2048, 00:09:47.502 "data_size": 63488 00:09:47.502 }, 00:09:47.502 { 00:09:47.502 "name": "pt2", 00:09:47.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.502 "is_configured": true, 00:09:47.502 "data_offset": 2048, 00:09:47.502 "data_size": 63488 00:09:47.502 }, 00:09:47.502 { 00:09:47.502 "name": null, 00:09:47.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.502 "is_configured": false, 00:09:47.502 "data_offset": 2048, 00:09:47.502 "data_size": 63488 00:09:47.502 } 
00:09:47.502 ] 00:09:47.502 }' 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.502 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.761 [2024-12-06 09:47:12.959190] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:47.761 [2024-12-06 09:47:12.959302] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.761 [2024-12-06 09:47:12.959339] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:47.761 [2024-12-06 09:47:12.959369] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.761 [2024-12-06 09:47:12.959872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.761 [2024-12-06 09:47:12.959938] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:47.761 [2024-12-06 09:47:12.960074] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:47.761 [2024-12-06 09:47:12.960132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:47.761 [2024-12-06 09:47:12.960312] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:47.761 [2024-12-06 09:47:12.960355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:47.761 [2024-12-06 09:47:12.960644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:47.761 [2024-12-06 09:47:12.960839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:47.761 [2024-12-06 09:47:12.960883] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:47.761 [2024-12-06 09:47:12.961079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.761 pt3 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.761 
09:47:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.761 09:47:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.761 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.761 "name": "raid_bdev1", 00:09:47.761 "uuid": "d33990f3-1636-47d9-9082-ad6c1cac6a7d", 00:09:47.761 "strip_size_kb": 0, 00:09:47.761 "state": "online", 00:09:47.761 "raid_level": "raid1", 00:09:47.761 "superblock": true, 00:09:47.761 "num_base_bdevs": 3, 00:09:47.761 "num_base_bdevs_discovered": 2, 00:09:47.761 "num_base_bdevs_operational": 2, 00:09:47.761 "base_bdevs_list": [ 00:09:47.761 { 00:09:47.761 "name": null, 00:09:47.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.761 "is_configured": false, 00:09:47.761 "data_offset": 2048, 00:09:47.761 "data_size": 63488 00:09:47.761 }, 00:09:47.761 { 00:09:47.761 "name": "pt2", 00:09:47.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.761 "is_configured": true, 00:09:47.761 "data_offset": 2048, 00:09:47.761 "data_size": 63488 00:09:47.761 }, 00:09:47.761 { 00:09:47.761 "name": "pt3", 00:09:47.761 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.761 "is_configured": true, 00:09:47.761 "data_offset": 2048, 00:09:47.761 "data_size": 63488 00:09:47.761 } 00:09:47.761 ] 00:09:47.761 }' 00:09:47.761 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.761 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.330 [2024-12-06 09:47:13.430322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.330 [2024-12-06 09:47:13.430398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.330 [2024-12-06 09:47:13.430492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.330 [2024-12-06 09:47:13.430568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.330 [2024-12-06 09:47:13.430617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.330 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.330 [2024-12-06 09:47:13.494248] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.330 [2024-12-06 09:47:13.494338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.330 [2024-12-06 09:47:13.494373] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:48.330 [2024-12-06 09:47:13.494400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.330 [2024-12-06 09:47:13.496533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.330 [2024-12-06 09:47:13.496603] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.330 [2024-12-06 09:47:13.496705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:48.330 [2024-12-06 09:47:13.496771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.331 [2024-12-06 09:47:13.496926] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:48.331 [2024-12-06 09:47:13.496976] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.331 [2024-12-06 09:47:13.497025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:48.331 [2024-12-06 09:47:13.497116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.331 pt1 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.331 "name": "raid_bdev1", 00:09:48.331 "uuid": "d33990f3-1636-47d9-9082-ad6c1cac6a7d", 00:09:48.331 "strip_size_kb": 0, 00:09:48.331 "state": "configuring", 00:09:48.331 "raid_level": "raid1", 00:09:48.331 "superblock": true, 00:09:48.331 "num_base_bdevs": 3, 00:09:48.331 "num_base_bdevs_discovered": 1, 00:09:48.331 "num_base_bdevs_operational": 2, 00:09:48.331 "base_bdevs_list": [ 00:09:48.331 { 00:09:48.331 "name": null, 00:09:48.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.331 "is_configured": false, 00:09:48.331 "data_offset": 2048, 00:09:48.331 "data_size": 63488 00:09:48.331 }, 00:09:48.331 { 00:09:48.331 "name": "pt2", 00:09:48.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.331 "is_configured": true, 00:09:48.331 "data_offset": 2048, 00:09:48.331 "data_size": 63488 00:09:48.331 }, 00:09:48.331 { 00:09:48.331 "name": null, 00:09:48.331 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.331 "is_configured": false, 00:09:48.331 "data_offset": 2048, 00:09:48.331 "data_size": 63488 00:09:48.331 } 00:09:48.331 ] 00:09:48.331 }' 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.331 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.900 [2024-12-06 09:47:13.965431] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:48.900 [2024-12-06 09:47:13.965563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.900 [2024-12-06 09:47:13.965595] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:48.900 [2024-12-06 09:47:13.965606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.900 [2024-12-06 09:47:13.966110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.900 [2024-12-06 09:47:13.966130] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:48.900 [2024-12-06 09:47:13.966251] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:48.900 [2024-12-06 09:47:13.966275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:48.900 [2024-12-06 09:47:13.966417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:48.900 [2024-12-06 09:47:13.966427] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:48.900 [2024-12-06 09:47:13.966697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:48.900 [2024-12-06 09:47:13.966859] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:48.900 [2024-12-06 09:47:13.966876] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:48.900 [2024-12-06 09:47:13.967023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.900 pt3 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.900 09:47:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:48.900 09:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.900 "name": "raid_bdev1", 00:09:48.900 "uuid": "d33990f3-1636-47d9-9082-ad6c1cac6a7d", 00:09:48.900 "strip_size_kb": 0, 00:09:48.900 "state": "online", 00:09:48.900 "raid_level": "raid1", 00:09:48.900 "superblock": true, 00:09:48.901 "num_base_bdevs": 3, 00:09:48.901 "num_base_bdevs_discovered": 2, 00:09:48.901 "num_base_bdevs_operational": 2, 00:09:48.901 "base_bdevs_list": [ 00:09:48.901 { 00:09:48.901 "name": null, 00:09:48.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.901 "is_configured": false, 00:09:48.901 "data_offset": 2048, 00:09:48.901 "data_size": 63488 00:09:48.901 }, 00:09:48.901 { 00:09:48.901 "name": "pt2", 00:09:48.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.901 "is_configured": true, 00:09:48.901 "data_offset": 2048, 00:09:48.901 "data_size": 63488 00:09:48.901 }, 00:09:48.901 { 00:09:48.901 "name": "pt3", 00:09:48.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.901 "is_configured": true, 00:09:48.901 "data_offset": 2048, 00:09:48.901 "data_size": 63488 00:09:48.901 } 00:09:48.901 ] 00:09:48.901 }' 00:09:48.901 09:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.901 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.161 09:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:49.161 09:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:49.161 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.161 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.161 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.161 09:47:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:49.161 09:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.161 09:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:49.161 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.161 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.161 [2024-12-06 09:47:14.404959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.161 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.422 09:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d33990f3-1636-47d9-9082-ad6c1cac6a7d '!=' d33990f3-1636-47d9-9082-ad6c1cac6a7d ']' 00:09:49.422 09:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68588 00:09:49.422 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68588 ']' 00:09:49.422 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68588 00:09:49.422 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:49.422 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.422 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68588 00:09:49.422 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.422 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.422 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68588' 00:09:49.422 killing process with pid 68588 00:09:49.422 09:47:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68588 00:09:49.422 [2024-12-06 09:47:14.483557] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:49.422 [2024-12-06 09:47:14.483657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.422 [2024-12-06 09:47:14.483720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.422 [2024-12-06 09:47:14.483732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:49.422 09:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68588 00:09:49.681 [2024-12-06 09:47:14.780812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:50.631 09:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:50.631 00:09:50.631 real 0m7.534s 00:09:50.631 user 0m11.797s 00:09:50.631 sys 0m1.322s 00:09:50.631 09:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.631 ************************************ 00:09:50.631 END TEST raid_superblock_test 00:09:50.631 ************************************ 00:09:50.631 09:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.891 09:47:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:50.891 09:47:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:50.891 09:47:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.891 09:47:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:50.891 ************************************ 00:09:50.891 START TEST raid_read_error_test 00:09:50.891 ************************************ 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:50.891 09:47:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:50.891 09:47:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xzovp3C9DG 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69035 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69035 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69035 ']' 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.891 09:47:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.891 [2024-12-06 09:47:16.058043] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:09:50.891 [2024-12-06 09:47:16.058286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69035 ] 00:09:51.151 [2024-12-06 09:47:16.232783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.151 [2024-12-06 09:47:16.344251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.410 [2024-12-06 09:47:16.537842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.410 [2024-12-06 09:47:16.537908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.670 09:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.670 09:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:51.670 09:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.670 09:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:51.670 09:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.670 09:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.938 BaseBdev1_malloc 00:09:51.938 09:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.938 09:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:51.938 09:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.938 09:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.938 true 00:09:51.938 09:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:51.938 09:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:51.938 09:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.938 09:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.938 [2024-12-06 09:47:16.967002] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:51.938 [2024-12-06 09:47:16.967134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.938 [2024-12-06 09:47:16.967180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:51.938 [2024-12-06 09:47:16.967195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.938 BaseBdev1 00:09:51.938 [2024-12-06 09:47:16.969631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.938 [2024-12-06 09:47:16.969676] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:51.938 09:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.938 09:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.938 09:47:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:51.938 09:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.938 09:47:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.938 BaseBdev2_malloc 00:09:51.938 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.938 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:51.938 09:47:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.938 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.939 true 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.939 [2024-12-06 09:47:17.034057] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:51.939 [2024-12-06 09:47:17.034191] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.939 [2024-12-06 09:47:17.034230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:51.939 [2024-12-06 09:47:17.034284] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.939 [2024-12-06 09:47:17.036560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.939 [2024-12-06 09:47:17.036667] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:51.939 BaseBdev2 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.939 BaseBdev3_malloc 00:09:51.939 09:47:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.939 true 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.939 [2024-12-06 09:47:17.106444] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:51.939 [2024-12-06 09:47:17.106570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.939 [2024-12-06 09:47:17.106593] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:51.939 [2024-12-06 09:47:17.106604] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.939 [2024-12-06 09:47:17.108639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.939 [2024-12-06 09:47:17.108679] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:51.939 BaseBdev3 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.939 [2024-12-06 09:47:17.118485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.939 [2024-12-06 09:47:17.120276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.939 [2024-12-06 09:47:17.120406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.939 [2024-12-06 09:47:17.120636] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:51.939 [2024-12-06 09:47:17.120684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:51.939 [2024-12-06 09:47:17.120939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:51.939 [2024-12-06 09:47:17.121159] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:51.939 [2024-12-06 09:47:17.121204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:51.939 [2024-12-06 09:47:17.121378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.939 09:47:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.939 "name": "raid_bdev1", 00:09:51.939 "uuid": "317fc014-3648-4a56-95bf-8abcc80a65fe", 00:09:51.939 "strip_size_kb": 0, 00:09:51.939 "state": "online", 00:09:51.939 "raid_level": "raid1", 00:09:51.939 "superblock": true, 00:09:51.939 "num_base_bdevs": 3, 00:09:51.939 "num_base_bdevs_discovered": 3, 00:09:51.939 "num_base_bdevs_operational": 3, 00:09:51.939 "base_bdevs_list": [ 00:09:51.939 { 00:09:51.939 "name": "BaseBdev1", 00:09:51.939 "uuid": "deca2ce3-8ad8-5693-b5fa-be79c86bf1b3", 00:09:51.939 "is_configured": true, 00:09:51.939 "data_offset": 2048, 00:09:51.939 "data_size": 63488 00:09:51.939 }, 00:09:51.939 { 00:09:51.939 "name": "BaseBdev2", 00:09:51.939 "uuid": "a08d5fcd-d5ac-5fca-b049-ca2eb31916ef", 00:09:51.939 "is_configured": true, 00:09:51.939 "data_offset": 2048, 00:09:51.939 "data_size": 63488 
00:09:51.939 }, 00:09:51.939 { 00:09:51.939 "name": "BaseBdev3", 00:09:51.939 "uuid": "4a33c86c-9fa6-50cc-970b-362ebbb5c2b4", 00:09:51.939 "is_configured": true, 00:09:51.939 "data_offset": 2048, 00:09:51.939 "data_size": 63488 00:09:51.939 } 00:09:51.939 ] 00:09:51.939 }' 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.939 09:47:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.505 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:52.506 09:47:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:52.506 [2024-12-06 09:47:17.662688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.441 
09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.441 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.441 "name": "raid_bdev1", 00:09:53.441 "uuid": "317fc014-3648-4a56-95bf-8abcc80a65fe", 00:09:53.441 "strip_size_kb": 0, 00:09:53.441 "state": "online", 00:09:53.441 "raid_level": "raid1", 00:09:53.441 "superblock": true, 00:09:53.441 "num_base_bdevs": 3, 00:09:53.441 "num_base_bdevs_discovered": 3, 00:09:53.441 "num_base_bdevs_operational": 3, 00:09:53.441 "base_bdevs_list": [ 00:09:53.441 { 00:09:53.442 "name": "BaseBdev1", 00:09:53.442 "uuid": "deca2ce3-8ad8-5693-b5fa-be79c86bf1b3", 
00:09:53.442 "is_configured": true, 00:09:53.442 "data_offset": 2048, 00:09:53.442 "data_size": 63488 00:09:53.442 }, 00:09:53.442 { 00:09:53.442 "name": "BaseBdev2", 00:09:53.442 "uuid": "a08d5fcd-d5ac-5fca-b049-ca2eb31916ef", 00:09:53.442 "is_configured": true, 00:09:53.442 "data_offset": 2048, 00:09:53.442 "data_size": 63488 00:09:53.442 }, 00:09:53.442 { 00:09:53.442 "name": "BaseBdev3", 00:09:53.442 "uuid": "4a33c86c-9fa6-50cc-970b-362ebbb5c2b4", 00:09:53.442 "is_configured": true, 00:09:53.442 "data_offset": 2048, 00:09:53.442 "data_size": 63488 00:09:53.442 } 00:09:53.442 ] 00:09:53.442 }' 00:09:53.442 09:47:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.442 09:47:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.011 [2024-12-06 09:47:19.086963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.011 [2024-12-06 09:47:19.087054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.011 [2024-12-06 09:47:19.089763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.011 [2024-12-06 09:47:19.089868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.011 [2024-12-06 09:47:19.090005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.011 [2024-12-06 09:47:19.090059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:54.011 { 00:09:54.011 "results": [ 00:09:54.011 { 00:09:54.011 "job": "raid_bdev1", 
00:09:54.011 "core_mask": "0x1", 00:09:54.011 "workload": "randrw", 00:09:54.011 "percentage": 50, 00:09:54.011 "status": "finished", 00:09:54.011 "queue_depth": 1, 00:09:54.011 "io_size": 131072, 00:09:54.011 "runtime": 1.425352, 00:09:54.011 "iops": 13325.831092951075, 00:09:54.011 "mibps": 1665.7288866188844, 00:09:54.011 "io_failed": 0, 00:09:54.011 "io_timeout": 0, 00:09:54.011 "avg_latency_us": 72.4072096313568, 00:09:54.011 "min_latency_us": 22.805240174672488, 00:09:54.011 "max_latency_us": 1531.0812227074236 00:09:54.011 } 00:09:54.011 ], 00:09:54.011 "core_count": 1 00:09:54.011 } 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69035 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69035 ']' 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69035 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69035 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69035' 00:09:54.011 killing process with pid 69035 00:09:54.011 09:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69035 00:09:54.011 [2024-12-06 09:47:19.121687] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.011 09:47:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69035 00:09:54.270 [2024-12-06 09:47:19.352778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.649 09:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xzovp3C9DG 00:09:55.649 09:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:55.649 09:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:55.649 09:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:55.649 09:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:55.649 ************************************ 00:09:55.649 END TEST raid_read_error_test 00:09:55.649 ************************************ 00:09:55.649 09:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.649 09:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:55.649 09:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:55.649 00:09:55.649 real 0m4.590s 00:09:55.649 user 0m5.490s 00:09:55.649 sys 0m0.550s 00:09:55.649 09:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.649 09:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.649 09:47:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:55.649 09:47:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:55.649 09:47:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.649 09:47:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.649 ************************************ 00:09:55.649 START TEST raid_write_error_test 00:09:55.649 ************************************ 00:09:55.649 09:47:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pwaGqeBGKw 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69175 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69175 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69175 ']' 00:09:55.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.649 09:47:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.649 [2024-12-06 09:47:20.723087] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:09:55.649 [2024-12-06 09:47:20.723730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69175 ] 00:09:55.649 [2024-12-06 09:47:20.899347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.908 [2024-12-06 09:47:21.011847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.167 [2024-12-06 09:47:21.213483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.167 [2024-12-06 09:47:21.213551] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.427 BaseBdev1_malloc 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.427 true 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.427 [2024-12-06 09:47:21.633569] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:56.427 [2024-12-06 09:47:21.633675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.427 [2024-12-06 09:47:21.633713] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:56.427 [2024-12-06 09:47:21.633743] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.427 [2024-12-06 09:47:21.635862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.427 [2024-12-06 09:47:21.635950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:56.427 BaseBdev1 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.427 BaseBdev2_malloc 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.427 true 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.427 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.687 [2024-12-06 09:47:21.699692] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:56.687 [2024-12-06 09:47:21.699760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.687 [2024-12-06 09:47:21.699776] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:56.687 [2024-12-06 09:47:21.699802] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.687 [2024-12-06 09:47:21.701831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.687 [2024-12-06 09:47:21.701871] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:56.687 BaseBdev2 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.687 09:47:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.687 BaseBdev3_malloc 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.687 true 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.687 [2024-12-06 09:47:21.779253] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:56.687 [2024-12-06 09:47:21.779311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.687 [2024-12-06 09:47:21.779346] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:56.687 [2024-12-06 09:47:21.779356] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.687 [2024-12-06 09:47:21.781541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.687 [2024-12-06 09:47:21.781631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:56.687 BaseBdev3 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.687 [2024-12-06 09:47:21.791316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.687 [2024-12-06 09:47:21.793070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.687 [2024-12-06 09:47:21.793192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.687 [2024-12-06 09:47:21.793432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:56.687 [2024-12-06 09:47:21.793479] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:56.687 [2024-12-06 09:47:21.793739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:56.687 [2024-12-06 09:47:21.793895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:56.687 [2024-12-06 09:47:21.793906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:56.687 [2024-12-06 09:47:21.794038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.687 "name": "raid_bdev1", 00:09:56.687 "uuid": "6dfb3151-3fa1-4a6c-9599-fd00d1ef05dd", 00:09:56.687 "strip_size_kb": 0, 00:09:56.687 "state": "online", 00:09:56.687 "raid_level": "raid1", 00:09:56.687 "superblock": true, 00:09:56.687 "num_base_bdevs": 3, 00:09:56.687 "num_base_bdevs_discovered": 3, 00:09:56.687 "num_base_bdevs_operational": 3, 00:09:56.687 "base_bdevs_list": [ 00:09:56.687 { 00:09:56.687 "name": "BaseBdev1", 00:09:56.687 
"uuid": "7f1828cf-f061-5b86-b0dc-6d9603f776c5", 00:09:56.687 "is_configured": true, 00:09:56.687 "data_offset": 2048, 00:09:56.687 "data_size": 63488 00:09:56.687 }, 00:09:56.687 { 00:09:56.687 "name": "BaseBdev2", 00:09:56.687 "uuid": "cbd3678f-bf00-5402-860e-f505797c6fdd", 00:09:56.687 "is_configured": true, 00:09:56.687 "data_offset": 2048, 00:09:56.687 "data_size": 63488 00:09:56.687 }, 00:09:56.687 { 00:09:56.687 "name": "BaseBdev3", 00:09:56.687 "uuid": "743650fb-44aa-5ee5-8bea-75e052e93e3a", 00:09:56.687 "is_configured": true, 00:09:56.687 "data_offset": 2048, 00:09:56.687 "data_size": 63488 00:09:56.687 } 00:09:56.687 ] 00:09:56.687 }' 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.687 09:47:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.261 09:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:57.262 09:47:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:57.262 [2024-12-06 09:47:22.371570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.200 [2024-12-06 09:47:23.298210] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:58.200 [2024-12-06 09:47:23.298352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:58.200 [2024-12-06 09:47:23.298620] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
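After the injected write failure removes BaseBdev1, the trace re-runs `verify_raid_bdev_state raid_bdev1 online raid1 0 2`, which queries `bdev_raid_get_bdevs` and checks the returned JSON. The same checks can be sketched in standalone Python against the JSON shape the trace reports (field values copied from the log above; the helper name mirrors the shell function, but this is an illustration, not the harness itself):

```python
import json

# JSON shape reported by `rpc_cmd bdev_raid_get_bdevs all` in the trace above,
# after the write error dropped BaseBdev1 (discovered/operational fell 3 -> 2).
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the shell helper's checks: state, raid level, strip size and
    # the number of still-operational base bdevs must all match expectations.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# raid1 tolerates losing one of three mirrors and stays online with two.
verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 2)
```

This is why the trace computes `expected_num_base_bdevs=2` for the raid1 write-error case: the failed base bdev is removed, but the mirror keeps serving I/O.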
00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:58.200 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.201 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.201 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.201 "name": "raid_bdev1", 00:09:58.201 "uuid": "6dfb3151-3fa1-4a6c-9599-fd00d1ef05dd", 00:09:58.201 "strip_size_kb": 0, 00:09:58.201 "state": "online", 00:09:58.201 "raid_level": "raid1", 00:09:58.201 "superblock": true, 00:09:58.201 "num_base_bdevs": 3, 00:09:58.201 "num_base_bdevs_discovered": 2, 00:09:58.201 "num_base_bdevs_operational": 2, 00:09:58.201 "base_bdevs_list": [ 00:09:58.201 { 00:09:58.201 "name": null, 00:09:58.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.201 "is_configured": false, 00:09:58.201 "data_offset": 0, 00:09:58.201 "data_size": 63488 00:09:58.201 }, 00:09:58.201 { 00:09:58.201 "name": "BaseBdev2", 00:09:58.201 "uuid": "cbd3678f-bf00-5402-860e-f505797c6fdd", 00:09:58.201 "is_configured": true, 00:09:58.201 "data_offset": 2048, 00:09:58.201 "data_size": 63488 00:09:58.201 }, 00:09:58.201 { 00:09:58.201 "name": "BaseBdev3", 00:09:58.201 "uuid": "743650fb-44aa-5ee5-8bea-75e052e93e3a", 00:09:58.201 "is_configured": true, 00:09:58.201 "data_offset": 2048, 00:09:58.201 "data_size": 63488 00:09:58.201 } 00:09:58.201 ] 00:09:58.201 }' 00:09:58.201 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.201 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.769 [2024-12-06 09:47:23.756659] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.769 [2024-12-06 09:47:23.756764] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.769 [2024-12-06 09:47:23.759632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.769 [2024-12-06 09:47:23.759738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.769 [2024-12-06 09:47:23.759867] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.769 [2024-12-06 09:47:23.759945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:58.769 { 00:09:58.769 "results": [ 00:09:58.769 { 00:09:58.769 "job": "raid_bdev1", 00:09:58.769 "core_mask": "0x1", 00:09:58.769 "workload": "randrw", 00:09:58.769 "percentage": 50, 00:09:58.769 "status": "finished", 00:09:58.769 "queue_depth": 1, 00:09:58.769 "io_size": 131072, 00:09:58.769 "runtime": 1.386056, 00:09:58.769 "iops": 14777.180719970911, 00:09:58.769 "mibps": 1847.1475899963639, 00:09:58.769 "io_failed": 0, 00:09:58.769 "io_timeout": 0, 00:09:58.769 "avg_latency_us": 65.03147678076266, 00:09:58.769 "min_latency_us": 22.46986899563319, 00:09:58.769 "max_latency_us": 1337.907423580786 00:09:58.769 } 00:09:58.769 ], 00:09:58.769 "core_count": 1 00:09:58.769 } 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69175 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69175 ']' 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69175 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:58.769 09:47:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69175 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69175' 00:09:58.769 killing process with pid 69175 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69175 00:09:58.769 [2024-12-06 09:47:23.807743] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.769 09:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69175 00:09:58.769 [2024-12-06 09:47:24.037783] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.150 09:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:00.150 09:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pwaGqeBGKw 00:10:00.150 09:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:00.150 09:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:00.150 09:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:00.150 09:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:00.150 09:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:00.150 09:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:00.150 00:10:00.150 real 0m4.614s 00:10:00.150 user 0m5.500s 00:10:00.150 sys 0m0.578s 00:10:00.150 
************************************ 00:10:00.150 END TEST raid_write_error_test 00:10:00.150 ************************************ 00:10:00.150 09:47:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.150 09:47:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.150 09:47:25 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:00.150 09:47:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:00.150 09:47:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:00.150 09:47:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:00.150 09:47:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.150 09:47:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:00.150 ************************************ 00:10:00.150 START TEST raid_state_function_test 00:10:00.150 ************************************ 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69319 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69319' 00:10:00.150 Process raid pid: 69319 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69319 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69319 ']' 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.150 09:47:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.150 [2024-12-06 09:47:25.398763] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:10:00.150 [2024-12-06 09:47:25.398980] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.408 [2024-12-06 09:47:25.574492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.668 [2024-12-06 09:47:25.696817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.668 [2024-12-06 09:47:25.899725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.668 [2024-12-06 09:47:25.899854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.236 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.236 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:01.236 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:01.236 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.236 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.236 [2024-12-06 09:47:26.247123] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.236 [2024-12-06 09:47:26.247248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.236 [2024-12-06 09:47:26.247282] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.236 [2024-12-06 09:47:26.247307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.236 [2024-12-06 09:47:26.247327] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:01.236 [2024-12-06 09:47:26.247348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.236 [2024-12-06 09:47:26.247366] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:01.236 [2024-12-06 09:47:26.247387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:01.236 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.237 "name": "Existed_Raid", 00:10:01.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.237 "strip_size_kb": 64, 00:10:01.237 "state": "configuring", 00:10:01.237 "raid_level": "raid0", 00:10:01.237 "superblock": false, 00:10:01.237 "num_base_bdevs": 4, 00:10:01.237 "num_base_bdevs_discovered": 0, 00:10:01.237 "num_base_bdevs_operational": 4, 00:10:01.237 "base_bdevs_list": [ 00:10:01.237 { 00:10:01.237 "name": "BaseBdev1", 00:10:01.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.237 "is_configured": false, 00:10:01.237 "data_offset": 0, 00:10:01.237 "data_size": 0 00:10:01.237 }, 00:10:01.237 { 00:10:01.237 "name": "BaseBdev2", 00:10:01.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.237 "is_configured": false, 00:10:01.237 "data_offset": 0, 00:10:01.237 "data_size": 0 00:10:01.237 }, 00:10:01.237 { 00:10:01.237 "name": "BaseBdev3", 00:10:01.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.237 "is_configured": false, 00:10:01.237 "data_offset": 0, 00:10:01.237 "data_size": 0 00:10:01.237 }, 00:10:01.237 { 00:10:01.237 "name": "BaseBdev4", 00:10:01.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.237 "is_configured": false, 00:10:01.237 "data_offset": 0, 00:10:01.237 "data_size": 0 00:10:01.237 } 00:10:01.237 ] 00:10:01.237 }' 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.237 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.497 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
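The state-function test above selects one raid bdev out of the `bdev_raid_get_bdevs all` array with the jq filter `.[] | select(.name == "Existed_Raid")`. A minimal Python sketch of that selection, using the "configuring" entry exactly as it appears in the trace (no base bdevs exist yet, so nothing is discovered):

```python
import json

# Array shape returned by `rpc_cmd bdev_raid_get_bdevs all`; values copied
# from the Existed_Raid entry in the trace above.
all_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "raid0",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 4
  }
]
""")

# Equivalent of jq's `.[] | select(.name == "Existed_Raid")`.
info = next(b for b in all_bdevs if b["name"] == "Existed_Raid")

# Until all four base bdevs are registered the raid stays in "configuring".
assert info["state"] == "configuring"
assert info["num_base_bdevs_discovered"] == 0
```

The trace then registers BaseBdev1 and re-checks; the raid only transitions out of "configuring" once discovered base bdevs reach the operational count.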
00:10:01.497 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.497 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.497 [2024-12-06 09:47:26.706292] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.497 [2024-12-06 09:47:26.706377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:01.497 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.497 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:01.497 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.497 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.497 [2024-12-06 09:47:26.718262] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.497 [2024-12-06 09:47:26.718344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.497 [2024-12-06 09:47:26.718372] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.497 [2024-12-06 09:47:26.718395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.497 [2024-12-06 09:47:26.718413] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:01.497 [2024-12-06 09:47:26.718434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:01.498 [2024-12-06 09:47:26.718452] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:01.498 [2024-12-06 09:47:26.718473] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:01.498 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.498 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.498 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.498 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.498 [2024-12-06 09:47:26.768676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.757 BaseBdev1 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.758 [ 00:10:01.758 { 00:10:01.758 "name": "BaseBdev1", 00:10:01.758 "aliases": [ 00:10:01.758 "75e43e05-72dd-4846-8296-4d729e58d984" 00:10:01.758 ], 00:10:01.758 "product_name": "Malloc disk", 00:10:01.758 "block_size": 512, 00:10:01.758 "num_blocks": 65536, 00:10:01.758 "uuid": "75e43e05-72dd-4846-8296-4d729e58d984", 00:10:01.758 "assigned_rate_limits": { 00:10:01.758 "rw_ios_per_sec": 0, 00:10:01.758 "rw_mbytes_per_sec": 0, 00:10:01.758 "r_mbytes_per_sec": 0, 00:10:01.758 "w_mbytes_per_sec": 0 00:10:01.758 }, 00:10:01.758 "claimed": true, 00:10:01.758 "claim_type": "exclusive_write", 00:10:01.758 "zoned": false, 00:10:01.758 "supported_io_types": { 00:10:01.758 "read": true, 00:10:01.758 "write": true, 00:10:01.758 "unmap": true, 00:10:01.758 "flush": true, 00:10:01.758 "reset": true, 00:10:01.758 "nvme_admin": false, 00:10:01.758 "nvme_io": false, 00:10:01.758 "nvme_io_md": false, 00:10:01.758 "write_zeroes": true, 00:10:01.758 "zcopy": true, 00:10:01.758 "get_zone_info": false, 00:10:01.758 "zone_management": false, 00:10:01.758 "zone_append": false, 00:10:01.758 "compare": false, 00:10:01.758 "compare_and_write": false, 00:10:01.758 "abort": true, 00:10:01.758 "seek_hole": false, 00:10:01.758 "seek_data": false, 00:10:01.758 "copy": true, 00:10:01.758 "nvme_iov_md": false 00:10:01.758 }, 00:10:01.758 "memory_domains": [ 00:10:01.758 { 00:10:01.758 "dma_device_id": "system", 00:10:01.758 "dma_device_type": 1 00:10:01.758 }, 00:10:01.758 { 00:10:01.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.758 "dma_device_type": 2 00:10:01.758 } 00:10:01.758 ], 00:10:01.758 "driver_specific": {} 00:10:01.758 } 00:10:01.758 ] 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.758 "name": "Existed_Raid", 
00:10:01.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.758 "strip_size_kb": 64, 00:10:01.758 "state": "configuring", 00:10:01.758 "raid_level": "raid0", 00:10:01.758 "superblock": false, 00:10:01.758 "num_base_bdevs": 4, 00:10:01.758 "num_base_bdevs_discovered": 1, 00:10:01.758 "num_base_bdevs_operational": 4, 00:10:01.758 "base_bdevs_list": [ 00:10:01.758 { 00:10:01.758 "name": "BaseBdev1", 00:10:01.758 "uuid": "75e43e05-72dd-4846-8296-4d729e58d984", 00:10:01.758 "is_configured": true, 00:10:01.758 "data_offset": 0, 00:10:01.758 "data_size": 65536 00:10:01.758 }, 00:10:01.758 { 00:10:01.758 "name": "BaseBdev2", 00:10:01.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.758 "is_configured": false, 00:10:01.758 "data_offset": 0, 00:10:01.758 "data_size": 0 00:10:01.758 }, 00:10:01.758 { 00:10:01.758 "name": "BaseBdev3", 00:10:01.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.758 "is_configured": false, 00:10:01.758 "data_offset": 0, 00:10:01.758 "data_size": 0 00:10:01.758 }, 00:10:01.758 { 00:10:01.758 "name": "BaseBdev4", 00:10:01.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.758 "is_configured": false, 00:10:01.758 "data_offset": 0, 00:10:01.758 "data_size": 0 00:10:01.758 } 00:10:01.758 ] 00:10:01.758 }' 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.758 09:47:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.018 [2024-12-06 09:47:27.227941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.018 [2024-12-06 09:47:27.228056] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.018 [2024-12-06 09:47:27.239977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:02.018 [2024-12-06 09:47:27.241968] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.018 [2024-12-06 09:47:27.242050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.018 [2024-12-06 09:47:27.242080] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:02.018 [2024-12-06 09:47:27.242106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.018 [2024-12-06 09:47:27.242125] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:02.018 [2024-12-06 09:47:27.242158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.018 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.019 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.019 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.019 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.279 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.279 "name": "Existed_Raid", 00:10:02.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.279 "strip_size_kb": 64, 00:10:02.279 "state": "configuring", 00:10:02.279 "raid_level": "raid0", 00:10:02.279 "superblock": false, 00:10:02.279 "num_base_bdevs": 4, 00:10:02.279 
"num_base_bdevs_discovered": 1, 00:10:02.279 "num_base_bdevs_operational": 4, 00:10:02.279 "base_bdevs_list": [ 00:10:02.279 { 00:10:02.279 "name": "BaseBdev1", 00:10:02.279 "uuid": "75e43e05-72dd-4846-8296-4d729e58d984", 00:10:02.279 "is_configured": true, 00:10:02.279 "data_offset": 0, 00:10:02.279 "data_size": 65536 00:10:02.279 }, 00:10:02.279 { 00:10:02.279 "name": "BaseBdev2", 00:10:02.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.279 "is_configured": false, 00:10:02.279 "data_offset": 0, 00:10:02.279 "data_size": 0 00:10:02.279 }, 00:10:02.279 { 00:10:02.279 "name": "BaseBdev3", 00:10:02.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.279 "is_configured": false, 00:10:02.279 "data_offset": 0, 00:10:02.279 "data_size": 0 00:10:02.279 }, 00:10:02.279 { 00:10:02.279 "name": "BaseBdev4", 00:10:02.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.279 "is_configured": false, 00:10:02.279 "data_offset": 0, 00:10:02.279 "data_size": 0 00:10:02.279 } 00:10:02.279 ] 00:10:02.279 }' 00:10:02.279 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.280 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.545 [2024-12-06 09:47:27.754388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.545 BaseBdev2 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:02.545 09:47:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.545 [ 00:10:02.545 { 00:10:02.545 "name": "BaseBdev2", 00:10:02.545 "aliases": [ 00:10:02.545 "99820545-f24e-4c1a-acbd-1a7e6766153e" 00:10:02.545 ], 00:10:02.545 "product_name": "Malloc disk", 00:10:02.545 "block_size": 512, 00:10:02.545 "num_blocks": 65536, 00:10:02.545 "uuid": "99820545-f24e-4c1a-acbd-1a7e6766153e", 00:10:02.545 "assigned_rate_limits": { 00:10:02.545 "rw_ios_per_sec": 0, 00:10:02.545 "rw_mbytes_per_sec": 0, 00:10:02.545 "r_mbytes_per_sec": 0, 00:10:02.545 "w_mbytes_per_sec": 0 00:10:02.545 }, 00:10:02.545 "claimed": true, 00:10:02.545 "claim_type": "exclusive_write", 00:10:02.545 "zoned": false, 00:10:02.545 "supported_io_types": { 
00:10:02.545 "read": true, 00:10:02.545 "write": true, 00:10:02.545 "unmap": true, 00:10:02.545 "flush": true, 00:10:02.545 "reset": true, 00:10:02.545 "nvme_admin": false, 00:10:02.545 "nvme_io": false, 00:10:02.545 "nvme_io_md": false, 00:10:02.545 "write_zeroes": true, 00:10:02.545 "zcopy": true, 00:10:02.545 "get_zone_info": false, 00:10:02.545 "zone_management": false, 00:10:02.545 "zone_append": false, 00:10:02.545 "compare": false, 00:10:02.545 "compare_and_write": false, 00:10:02.545 "abort": true, 00:10:02.545 "seek_hole": false, 00:10:02.545 "seek_data": false, 00:10:02.545 "copy": true, 00:10:02.545 "nvme_iov_md": false 00:10:02.545 }, 00:10:02.545 "memory_domains": [ 00:10:02.545 { 00:10:02.545 "dma_device_id": "system", 00:10:02.545 "dma_device_type": 1 00:10:02.545 }, 00:10:02.545 { 00:10:02.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.545 "dma_device_type": 2 00:10:02.545 } 00:10:02.545 ], 00:10:02.545 "driver_specific": {} 00:10:02.545 } 00:10:02.545 ] 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.545 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.804 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.804 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.804 "name": "Existed_Raid", 00:10:02.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.804 "strip_size_kb": 64, 00:10:02.804 "state": "configuring", 00:10:02.804 "raid_level": "raid0", 00:10:02.804 "superblock": false, 00:10:02.804 "num_base_bdevs": 4, 00:10:02.804 "num_base_bdevs_discovered": 2, 00:10:02.804 "num_base_bdevs_operational": 4, 00:10:02.804 "base_bdevs_list": [ 00:10:02.804 { 00:10:02.804 "name": "BaseBdev1", 00:10:02.804 "uuid": "75e43e05-72dd-4846-8296-4d729e58d984", 00:10:02.804 "is_configured": true, 00:10:02.804 "data_offset": 0, 00:10:02.804 "data_size": 65536 00:10:02.804 }, 00:10:02.804 { 00:10:02.804 "name": "BaseBdev2", 00:10:02.804 "uuid": "99820545-f24e-4c1a-acbd-1a7e6766153e", 00:10:02.804 
"is_configured": true, 00:10:02.804 "data_offset": 0, 00:10:02.804 "data_size": 65536 00:10:02.804 }, 00:10:02.804 { 00:10:02.804 "name": "BaseBdev3", 00:10:02.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.804 "is_configured": false, 00:10:02.804 "data_offset": 0, 00:10:02.804 "data_size": 0 00:10:02.804 }, 00:10:02.804 { 00:10:02.804 "name": "BaseBdev4", 00:10:02.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.804 "is_configured": false, 00:10:02.804 "data_offset": 0, 00:10:02.804 "data_size": 0 00:10:02.804 } 00:10:02.804 ] 00:10:02.804 }' 00:10:02.804 09:47:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.804 09:47:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.062 [2024-12-06 09:47:28.283582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.062 BaseBdev3 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.062 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.062 [ 00:10:03.063 { 00:10:03.063 "name": "BaseBdev3", 00:10:03.063 "aliases": [ 00:10:03.063 "04a3d400-f1d8-41fe-9779-8dec416efc24" 00:10:03.063 ], 00:10:03.063 "product_name": "Malloc disk", 00:10:03.063 "block_size": 512, 00:10:03.063 "num_blocks": 65536, 00:10:03.063 "uuid": "04a3d400-f1d8-41fe-9779-8dec416efc24", 00:10:03.063 "assigned_rate_limits": { 00:10:03.063 "rw_ios_per_sec": 0, 00:10:03.063 "rw_mbytes_per_sec": 0, 00:10:03.063 "r_mbytes_per_sec": 0, 00:10:03.063 "w_mbytes_per_sec": 0 00:10:03.063 }, 00:10:03.063 "claimed": true, 00:10:03.063 "claim_type": "exclusive_write", 00:10:03.063 "zoned": false, 00:10:03.063 "supported_io_types": { 00:10:03.063 "read": true, 00:10:03.063 "write": true, 00:10:03.063 "unmap": true, 00:10:03.063 "flush": true, 00:10:03.063 "reset": true, 00:10:03.063 "nvme_admin": false, 00:10:03.063 "nvme_io": false, 00:10:03.063 "nvme_io_md": false, 00:10:03.063 "write_zeroes": true, 00:10:03.063 "zcopy": true, 00:10:03.063 "get_zone_info": false, 00:10:03.063 "zone_management": false, 00:10:03.063 "zone_append": false, 00:10:03.063 "compare": false, 00:10:03.063 "compare_and_write": false, 
00:10:03.063 "abort": true, 00:10:03.063 "seek_hole": false, 00:10:03.063 "seek_data": false, 00:10:03.063 "copy": true, 00:10:03.063 "nvme_iov_md": false 00:10:03.063 }, 00:10:03.063 "memory_domains": [ 00:10:03.063 { 00:10:03.063 "dma_device_id": "system", 00:10:03.063 "dma_device_type": 1 00:10:03.063 }, 00:10:03.063 { 00:10:03.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.063 "dma_device_type": 2 00:10:03.063 } 00:10:03.063 ], 00:10:03.063 "driver_specific": {} 00:10:03.063 } 00:10:03.063 ] 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.063 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.322 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.322 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.322 "name": "Existed_Raid", 00:10:03.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.322 "strip_size_kb": 64, 00:10:03.322 "state": "configuring", 00:10:03.322 "raid_level": "raid0", 00:10:03.322 "superblock": false, 00:10:03.322 "num_base_bdevs": 4, 00:10:03.322 "num_base_bdevs_discovered": 3, 00:10:03.322 "num_base_bdevs_operational": 4, 00:10:03.322 "base_bdevs_list": [ 00:10:03.322 { 00:10:03.322 "name": "BaseBdev1", 00:10:03.322 "uuid": "75e43e05-72dd-4846-8296-4d729e58d984", 00:10:03.322 "is_configured": true, 00:10:03.322 "data_offset": 0, 00:10:03.322 "data_size": 65536 00:10:03.322 }, 00:10:03.322 { 00:10:03.322 "name": "BaseBdev2", 00:10:03.322 "uuid": "99820545-f24e-4c1a-acbd-1a7e6766153e", 00:10:03.322 "is_configured": true, 00:10:03.322 "data_offset": 0, 00:10:03.322 "data_size": 65536 00:10:03.322 }, 00:10:03.322 { 00:10:03.322 "name": "BaseBdev3", 00:10:03.322 "uuid": "04a3d400-f1d8-41fe-9779-8dec416efc24", 00:10:03.322 "is_configured": true, 00:10:03.322 "data_offset": 0, 00:10:03.322 "data_size": 65536 00:10:03.322 }, 00:10:03.322 { 00:10:03.322 "name": "BaseBdev4", 00:10:03.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.322 "is_configured": false, 
00:10:03.322 "data_offset": 0, 00:10:03.322 "data_size": 0 00:10:03.322 } 00:10:03.322 ] 00:10:03.322 }' 00:10:03.322 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.322 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.582 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:03.582 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.582 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.582 [2024-12-06 09:47:28.749829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:03.582 [2024-12-06 09:47:28.749963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:03.582 [2024-12-06 09:47:28.749992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:03.582 [2024-12-06 09:47:28.750348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:03.582 [2024-12-06 09:47:28.750589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:03.582 [2024-12-06 09:47:28.750645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:03.582 [2024-12-06 09:47:28.751000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.582 BaseBdev4 00:10:03.582 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.582 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:03.582 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:03.582 09:47:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.582 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.582 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.583 [ 00:10:03.583 { 00:10:03.583 "name": "BaseBdev4", 00:10:03.583 "aliases": [ 00:10:03.583 "e500d80b-162d-4a14-b93f-58222b3ff837" 00:10:03.583 ], 00:10:03.583 "product_name": "Malloc disk", 00:10:03.583 "block_size": 512, 00:10:03.583 "num_blocks": 65536, 00:10:03.583 "uuid": "e500d80b-162d-4a14-b93f-58222b3ff837", 00:10:03.583 "assigned_rate_limits": { 00:10:03.583 "rw_ios_per_sec": 0, 00:10:03.583 "rw_mbytes_per_sec": 0, 00:10:03.583 "r_mbytes_per_sec": 0, 00:10:03.583 "w_mbytes_per_sec": 0 00:10:03.583 }, 00:10:03.583 "claimed": true, 00:10:03.583 "claim_type": "exclusive_write", 00:10:03.583 "zoned": false, 00:10:03.583 "supported_io_types": { 00:10:03.583 "read": true, 00:10:03.583 "write": true, 00:10:03.583 "unmap": true, 00:10:03.583 "flush": true, 00:10:03.583 "reset": true, 00:10:03.583 
"nvme_admin": false, 00:10:03.583 "nvme_io": false, 00:10:03.583 "nvme_io_md": false, 00:10:03.583 "write_zeroes": true, 00:10:03.583 "zcopy": true, 00:10:03.583 "get_zone_info": false, 00:10:03.583 "zone_management": false, 00:10:03.583 "zone_append": false, 00:10:03.583 "compare": false, 00:10:03.583 "compare_and_write": false, 00:10:03.583 "abort": true, 00:10:03.583 "seek_hole": false, 00:10:03.583 "seek_data": false, 00:10:03.583 "copy": true, 00:10:03.583 "nvme_iov_md": false 00:10:03.583 }, 00:10:03.583 "memory_domains": [ 00:10:03.583 { 00:10:03.583 "dma_device_id": "system", 00:10:03.583 "dma_device_type": 1 00:10:03.583 }, 00:10:03.583 { 00:10:03.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.583 "dma_device_type": 2 00:10:03.583 } 00:10:03.583 ], 00:10:03.583 "driver_specific": {} 00:10:03.583 } 00:10:03.583 ] 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.583 09:47:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.583 "name": "Existed_Raid", 00:10:03.583 "uuid": "1575531a-e9d4-44c6-8677-4020973b1c32", 00:10:03.583 "strip_size_kb": 64, 00:10:03.583 "state": "online", 00:10:03.583 "raid_level": "raid0", 00:10:03.583 "superblock": false, 00:10:03.583 "num_base_bdevs": 4, 00:10:03.583 "num_base_bdevs_discovered": 4, 00:10:03.583 "num_base_bdevs_operational": 4, 00:10:03.583 "base_bdevs_list": [ 00:10:03.583 { 00:10:03.583 "name": "BaseBdev1", 00:10:03.583 "uuid": "75e43e05-72dd-4846-8296-4d729e58d984", 00:10:03.583 "is_configured": true, 00:10:03.583 "data_offset": 0, 00:10:03.583 "data_size": 65536 00:10:03.583 }, 00:10:03.583 { 00:10:03.583 "name": "BaseBdev2", 00:10:03.583 "uuid": "99820545-f24e-4c1a-acbd-1a7e6766153e", 00:10:03.583 "is_configured": true, 00:10:03.583 "data_offset": 0, 00:10:03.583 "data_size": 65536 00:10:03.583 }, 00:10:03.583 { 00:10:03.583 "name": "BaseBdev3", 00:10:03.583 "uuid": 
"04a3d400-f1d8-41fe-9779-8dec416efc24", 00:10:03.583 "is_configured": true, 00:10:03.583 "data_offset": 0, 00:10:03.583 "data_size": 65536 00:10:03.583 }, 00:10:03.583 { 00:10:03.583 "name": "BaseBdev4", 00:10:03.583 "uuid": "e500d80b-162d-4a14-b93f-58222b3ff837", 00:10:03.583 "is_configured": true, 00:10:03.583 "data_offset": 0, 00:10:03.583 "data_size": 65536 00:10:03.583 } 00:10:03.583 ] 00:10:03.583 }' 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.583 09:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.151 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.151 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.151 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.151 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.151 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.151 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.151 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.151 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.151 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.151 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.151 [2024-12-06 09:47:29.229429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.151 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.151 09:47:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.151 "name": "Existed_Raid", 00:10:04.151 "aliases": [ 00:10:04.151 "1575531a-e9d4-44c6-8677-4020973b1c32" 00:10:04.151 ], 00:10:04.151 "product_name": "Raid Volume", 00:10:04.151 "block_size": 512, 00:10:04.151 "num_blocks": 262144, 00:10:04.151 "uuid": "1575531a-e9d4-44c6-8677-4020973b1c32", 00:10:04.151 "assigned_rate_limits": { 00:10:04.151 "rw_ios_per_sec": 0, 00:10:04.151 "rw_mbytes_per_sec": 0, 00:10:04.151 "r_mbytes_per_sec": 0, 00:10:04.151 "w_mbytes_per_sec": 0 00:10:04.151 }, 00:10:04.151 "claimed": false, 00:10:04.151 "zoned": false, 00:10:04.151 "supported_io_types": { 00:10:04.151 "read": true, 00:10:04.151 "write": true, 00:10:04.151 "unmap": true, 00:10:04.151 "flush": true, 00:10:04.151 "reset": true, 00:10:04.151 "nvme_admin": false, 00:10:04.151 "nvme_io": false, 00:10:04.151 "nvme_io_md": false, 00:10:04.151 "write_zeroes": true, 00:10:04.151 "zcopy": false, 00:10:04.151 "get_zone_info": false, 00:10:04.151 "zone_management": false, 00:10:04.151 "zone_append": false, 00:10:04.151 "compare": false, 00:10:04.151 "compare_and_write": false, 00:10:04.151 "abort": false, 00:10:04.151 "seek_hole": false, 00:10:04.151 "seek_data": false, 00:10:04.151 "copy": false, 00:10:04.151 "nvme_iov_md": false 00:10:04.151 }, 00:10:04.151 "memory_domains": [ 00:10:04.151 { 00:10:04.151 "dma_device_id": "system", 00:10:04.151 "dma_device_type": 1 00:10:04.151 }, 00:10:04.151 { 00:10:04.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.151 "dma_device_type": 2 00:10:04.151 }, 00:10:04.151 { 00:10:04.151 "dma_device_id": "system", 00:10:04.151 "dma_device_type": 1 00:10:04.151 }, 00:10:04.151 { 00:10:04.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.151 "dma_device_type": 2 00:10:04.151 }, 00:10:04.151 { 00:10:04.151 "dma_device_id": "system", 00:10:04.151 "dma_device_type": 1 00:10:04.151 }, 00:10:04.151 { 00:10:04.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:04.151 "dma_device_type": 2 00:10:04.151 }, 00:10:04.151 { 00:10:04.151 "dma_device_id": "system", 00:10:04.151 "dma_device_type": 1 00:10:04.151 }, 00:10:04.151 { 00:10:04.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.151 "dma_device_type": 2 00:10:04.151 } 00:10:04.151 ], 00:10:04.151 "driver_specific": { 00:10:04.152 "raid": { 00:10:04.152 "uuid": "1575531a-e9d4-44c6-8677-4020973b1c32", 00:10:04.152 "strip_size_kb": 64, 00:10:04.152 "state": "online", 00:10:04.152 "raid_level": "raid0", 00:10:04.152 "superblock": false, 00:10:04.152 "num_base_bdevs": 4, 00:10:04.152 "num_base_bdevs_discovered": 4, 00:10:04.152 "num_base_bdevs_operational": 4, 00:10:04.152 "base_bdevs_list": [ 00:10:04.152 { 00:10:04.152 "name": "BaseBdev1", 00:10:04.152 "uuid": "75e43e05-72dd-4846-8296-4d729e58d984", 00:10:04.152 "is_configured": true, 00:10:04.152 "data_offset": 0, 00:10:04.152 "data_size": 65536 00:10:04.152 }, 00:10:04.152 { 00:10:04.152 "name": "BaseBdev2", 00:10:04.152 "uuid": "99820545-f24e-4c1a-acbd-1a7e6766153e", 00:10:04.152 "is_configured": true, 00:10:04.152 "data_offset": 0, 00:10:04.152 "data_size": 65536 00:10:04.152 }, 00:10:04.152 { 00:10:04.152 "name": "BaseBdev3", 00:10:04.152 "uuid": "04a3d400-f1d8-41fe-9779-8dec416efc24", 00:10:04.152 "is_configured": true, 00:10:04.152 "data_offset": 0, 00:10:04.152 "data_size": 65536 00:10:04.152 }, 00:10:04.152 { 00:10:04.152 "name": "BaseBdev4", 00:10:04.152 "uuid": "e500d80b-162d-4a14-b93f-58222b3ff837", 00:10:04.152 "is_configured": true, 00:10:04.152 "data_offset": 0, 00:10:04.152 "data_size": 65536 00:10:04.152 } 00:10:04.152 ] 00:10:04.152 } 00:10:04.152 } 00:10:04.152 }' 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:04.152 BaseBdev2 00:10:04.152 BaseBdev3 
00:10:04.152 BaseBdev4' 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.152 09:47:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.152 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.448 09:47:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.448 [2024-12-06 09:47:29.524614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:04.448 [2024-12-06 09:47:29.524690] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.448 [2024-12-06 09:47:29.524763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.448 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.448 "name": "Existed_Raid", 00:10:04.448 "uuid": "1575531a-e9d4-44c6-8677-4020973b1c32", 00:10:04.448 "strip_size_kb": 64, 00:10:04.448 "state": "offline", 00:10:04.448 "raid_level": "raid0", 00:10:04.448 "superblock": false, 00:10:04.448 "num_base_bdevs": 4, 00:10:04.448 "num_base_bdevs_discovered": 3, 00:10:04.448 "num_base_bdevs_operational": 3, 00:10:04.448 "base_bdevs_list": [ 00:10:04.448 { 00:10:04.448 "name": null, 00:10:04.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.448 "is_configured": false, 00:10:04.448 "data_offset": 0, 00:10:04.448 "data_size": 65536 00:10:04.448 }, 00:10:04.448 { 00:10:04.448 "name": "BaseBdev2", 00:10:04.448 "uuid": "99820545-f24e-4c1a-acbd-1a7e6766153e", 00:10:04.448 "is_configured": 
true, 00:10:04.449 "data_offset": 0, 00:10:04.449 "data_size": 65536 00:10:04.449 }, 00:10:04.449 { 00:10:04.449 "name": "BaseBdev3", 00:10:04.449 "uuid": "04a3d400-f1d8-41fe-9779-8dec416efc24", 00:10:04.449 "is_configured": true, 00:10:04.449 "data_offset": 0, 00:10:04.449 "data_size": 65536 00:10:04.449 }, 00:10:04.449 { 00:10:04.449 "name": "BaseBdev4", 00:10:04.449 "uuid": "e500d80b-162d-4a14-b93f-58222b3ff837", 00:10:04.449 "is_configured": true, 00:10:04.449 "data_offset": 0, 00:10:04.449 "data_size": 65536 00:10:04.449 } 00:10:04.449 ] 00:10:04.449 }' 00:10:04.449 09:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.449 09:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.016 [2024-12-06 09:47:30.151316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.016 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.275 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.275 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.275 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:05.275 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.275 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.275 [2024-12-06 09:47:30.305947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:05.275 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.275 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.275 09:47:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.275 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:05.275 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.276 [2024-12-06 09:47:30.442506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:05.276 [2024-12-06 09:47:30.442611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:05.276 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.535 BaseBdev2 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.535 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.535 [ 00:10:05.535 { 00:10:05.535 "name": "BaseBdev2", 00:10:05.535 "aliases": [ 00:10:05.535 "4430f8d5-1a67-458d-89dd-617bc4adcc6a" 00:10:05.535 ], 00:10:05.535 "product_name": "Malloc disk", 00:10:05.535 "block_size": 512, 00:10:05.535 "num_blocks": 65536, 00:10:05.535 "uuid": "4430f8d5-1a67-458d-89dd-617bc4adcc6a", 00:10:05.535 "assigned_rate_limits": { 00:10:05.535 "rw_ios_per_sec": 0, 00:10:05.536 "rw_mbytes_per_sec": 0, 00:10:05.536 "r_mbytes_per_sec": 0, 00:10:05.536 "w_mbytes_per_sec": 0 00:10:05.536 }, 00:10:05.536 "claimed": false, 00:10:05.536 "zoned": false, 00:10:05.536 "supported_io_types": { 00:10:05.536 "read": true, 00:10:05.536 "write": true, 00:10:05.536 "unmap": true, 00:10:05.536 "flush": true, 00:10:05.536 "reset": true, 00:10:05.536 "nvme_admin": false, 00:10:05.536 "nvme_io": false, 00:10:05.536 "nvme_io_md": false, 00:10:05.536 "write_zeroes": true, 00:10:05.536 "zcopy": true, 00:10:05.536 "get_zone_info": false, 00:10:05.536 "zone_management": false, 00:10:05.536 "zone_append": false, 00:10:05.536 "compare": false, 00:10:05.536 "compare_and_write": false, 00:10:05.536 "abort": true, 00:10:05.536 "seek_hole": false, 00:10:05.536 
"seek_data": false, 00:10:05.536 "copy": true, 00:10:05.536 "nvme_iov_md": false 00:10:05.536 }, 00:10:05.536 "memory_domains": [ 00:10:05.536 { 00:10:05.536 "dma_device_id": "system", 00:10:05.536 "dma_device_type": 1 00:10:05.536 }, 00:10:05.536 { 00:10:05.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.536 "dma_device_type": 2 00:10:05.536 } 00:10:05.536 ], 00:10:05.536 "driver_specific": {} 00:10:05.536 } 00:10:05.536 ] 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.536 BaseBdev3 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.536 [ 00:10:05.536 { 00:10:05.536 "name": "BaseBdev3", 00:10:05.536 "aliases": [ 00:10:05.536 "7b80bc6f-3f01-47e9-a38c-bfbda12c069d" 00:10:05.536 ], 00:10:05.536 "product_name": "Malloc disk", 00:10:05.536 "block_size": 512, 00:10:05.536 "num_blocks": 65536, 00:10:05.536 "uuid": "7b80bc6f-3f01-47e9-a38c-bfbda12c069d", 00:10:05.536 "assigned_rate_limits": { 00:10:05.536 "rw_ios_per_sec": 0, 00:10:05.536 "rw_mbytes_per_sec": 0, 00:10:05.536 "r_mbytes_per_sec": 0, 00:10:05.536 "w_mbytes_per_sec": 0 00:10:05.536 }, 00:10:05.536 "claimed": false, 00:10:05.536 "zoned": false, 00:10:05.536 "supported_io_types": { 00:10:05.536 "read": true, 00:10:05.536 "write": true, 00:10:05.536 "unmap": true, 00:10:05.536 "flush": true, 00:10:05.536 "reset": true, 00:10:05.536 "nvme_admin": false, 00:10:05.536 "nvme_io": false, 00:10:05.536 "nvme_io_md": false, 00:10:05.536 "write_zeroes": true, 00:10:05.536 "zcopy": true, 00:10:05.536 "get_zone_info": false, 00:10:05.536 "zone_management": false, 00:10:05.536 "zone_append": false, 00:10:05.536 "compare": false, 00:10:05.536 "compare_and_write": false, 00:10:05.536 "abort": true, 00:10:05.536 "seek_hole": false, 00:10:05.536 "seek_data": false, 
00:10:05.536 "copy": true, 00:10:05.536 "nvme_iov_md": false 00:10:05.536 }, 00:10:05.536 "memory_domains": [ 00:10:05.536 { 00:10:05.536 "dma_device_id": "system", 00:10:05.536 "dma_device_type": 1 00:10:05.536 }, 00:10:05.536 { 00:10:05.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.536 "dma_device_type": 2 00:10:05.536 } 00:10:05.536 ], 00:10:05.536 "driver_specific": {} 00:10:05.536 } 00:10:05.536 ] 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.536 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.793 BaseBdev4 00:10:05.793 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.793 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:05.793 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:05.793 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.793 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.793 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.793 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.793 
09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.793 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.793 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.793 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.793 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:05.793 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.794 [ 00:10:05.794 { 00:10:05.794 "name": "BaseBdev4", 00:10:05.794 "aliases": [ 00:10:05.794 "60825174-8eba-4c0b-b8ef-352bbd2be457" 00:10:05.794 ], 00:10:05.794 "product_name": "Malloc disk", 00:10:05.794 "block_size": 512, 00:10:05.794 "num_blocks": 65536, 00:10:05.794 "uuid": "60825174-8eba-4c0b-b8ef-352bbd2be457", 00:10:05.794 "assigned_rate_limits": { 00:10:05.794 "rw_ios_per_sec": 0, 00:10:05.794 "rw_mbytes_per_sec": 0, 00:10:05.794 "r_mbytes_per_sec": 0, 00:10:05.794 "w_mbytes_per_sec": 0 00:10:05.794 }, 00:10:05.794 "claimed": false, 00:10:05.794 "zoned": false, 00:10:05.794 "supported_io_types": { 00:10:05.794 "read": true, 00:10:05.794 "write": true, 00:10:05.794 "unmap": true, 00:10:05.794 "flush": true, 00:10:05.794 "reset": true, 00:10:05.794 "nvme_admin": false, 00:10:05.794 "nvme_io": false, 00:10:05.794 "nvme_io_md": false, 00:10:05.794 "write_zeroes": true, 00:10:05.794 "zcopy": true, 00:10:05.794 "get_zone_info": false, 00:10:05.794 "zone_management": false, 00:10:05.794 "zone_append": false, 00:10:05.794 "compare": false, 00:10:05.794 "compare_and_write": false, 00:10:05.794 "abort": true, 00:10:05.794 "seek_hole": false, 00:10:05.794 "seek_data": false, 00:10:05.794 
"copy": true, 00:10:05.794 "nvme_iov_md": false 00:10:05.794 }, 00:10:05.794 "memory_domains": [ 00:10:05.794 { 00:10:05.794 "dma_device_id": "system", 00:10:05.794 "dma_device_type": 1 00:10:05.794 }, 00:10:05.794 { 00:10:05.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.794 "dma_device_type": 2 00:10:05.794 } 00:10:05.794 ], 00:10:05.794 "driver_specific": {} 00:10:05.794 } 00:10:05.794 ] 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.794 [2024-12-06 09:47:30.850262] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:05.794 [2024-12-06 09:47:30.850353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:05.794 [2024-12-06 09:47:30.850418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.794 [2024-12-06 09:47:30.852283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.794 [2024-12-06 09:47:30.852377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.794 09:47:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.794 "name": "Existed_Raid", 00:10:05.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.794 "strip_size_kb": 64, 00:10:05.794 "state": "configuring", 00:10:05.794 
"raid_level": "raid0", 00:10:05.794 "superblock": false, 00:10:05.794 "num_base_bdevs": 4, 00:10:05.794 "num_base_bdevs_discovered": 3, 00:10:05.794 "num_base_bdevs_operational": 4, 00:10:05.794 "base_bdevs_list": [ 00:10:05.794 { 00:10:05.794 "name": "BaseBdev1", 00:10:05.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.794 "is_configured": false, 00:10:05.794 "data_offset": 0, 00:10:05.794 "data_size": 0 00:10:05.794 }, 00:10:05.794 { 00:10:05.794 "name": "BaseBdev2", 00:10:05.794 "uuid": "4430f8d5-1a67-458d-89dd-617bc4adcc6a", 00:10:05.794 "is_configured": true, 00:10:05.794 "data_offset": 0, 00:10:05.794 "data_size": 65536 00:10:05.794 }, 00:10:05.794 { 00:10:05.794 "name": "BaseBdev3", 00:10:05.794 "uuid": "7b80bc6f-3f01-47e9-a38c-bfbda12c069d", 00:10:05.794 "is_configured": true, 00:10:05.794 "data_offset": 0, 00:10:05.794 "data_size": 65536 00:10:05.794 }, 00:10:05.794 { 00:10:05.794 "name": "BaseBdev4", 00:10:05.794 "uuid": "60825174-8eba-4c0b-b8ef-352bbd2be457", 00:10:05.794 "is_configured": true, 00:10:05.794 "data_offset": 0, 00:10:05.794 "data_size": 65536 00:10:05.794 } 00:10:05.794 ] 00:10:05.794 }' 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.794 09:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.052 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:06.052 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.052 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.309 [2024-12-06 09:47:31.329448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.309 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.309 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.309 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.309 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.309 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.309 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.309 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.309 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.309 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.309 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.309 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.309 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.310 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.310 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.310 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.310 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.310 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.310 "name": "Existed_Raid", 00:10:06.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.310 "strip_size_kb": 64, 00:10:06.310 "state": "configuring", 00:10:06.310 "raid_level": "raid0", 00:10:06.310 "superblock": false, 00:10:06.310 
"num_base_bdevs": 4, 00:10:06.310 "num_base_bdevs_discovered": 2, 00:10:06.310 "num_base_bdevs_operational": 4, 00:10:06.310 "base_bdevs_list": [ 00:10:06.310 { 00:10:06.310 "name": "BaseBdev1", 00:10:06.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.310 "is_configured": false, 00:10:06.310 "data_offset": 0, 00:10:06.310 "data_size": 0 00:10:06.310 }, 00:10:06.310 { 00:10:06.310 "name": null, 00:10:06.310 "uuid": "4430f8d5-1a67-458d-89dd-617bc4adcc6a", 00:10:06.310 "is_configured": false, 00:10:06.310 "data_offset": 0, 00:10:06.310 "data_size": 65536 00:10:06.310 }, 00:10:06.310 { 00:10:06.310 "name": "BaseBdev3", 00:10:06.310 "uuid": "7b80bc6f-3f01-47e9-a38c-bfbda12c069d", 00:10:06.310 "is_configured": true, 00:10:06.310 "data_offset": 0, 00:10:06.310 "data_size": 65536 00:10:06.310 }, 00:10:06.310 { 00:10:06.310 "name": "BaseBdev4", 00:10:06.310 "uuid": "60825174-8eba-4c0b-b8ef-352bbd2be457", 00:10:06.310 "is_configured": true, 00:10:06.310 "data_offset": 0, 00:10:06.310 "data_size": 65536 00:10:06.310 } 00:10:06.310 ] 00:10:06.310 }' 00:10:06.310 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.310 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.569 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.569 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.569 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.569 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.569 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.569 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:06.569 09:47:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:06.569 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.569 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.829 [2024-12-06 09:47:31.853909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.829 BaseBdev1 00:10:06.829 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.829 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:06.829 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:06.829 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.829 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:06.829 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.829 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.829 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.829 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.830 [ 00:10:06.830 { 00:10:06.830 "name": "BaseBdev1", 00:10:06.830 "aliases": [ 00:10:06.830 "7a5eab6e-dac0-4209-898c-48f8732df482" 00:10:06.830 ], 00:10:06.830 "product_name": "Malloc disk", 00:10:06.830 "block_size": 512, 00:10:06.830 "num_blocks": 65536, 00:10:06.830 "uuid": "7a5eab6e-dac0-4209-898c-48f8732df482", 00:10:06.830 "assigned_rate_limits": { 00:10:06.830 "rw_ios_per_sec": 0, 00:10:06.830 "rw_mbytes_per_sec": 0, 00:10:06.830 "r_mbytes_per_sec": 0, 00:10:06.830 "w_mbytes_per_sec": 0 00:10:06.830 }, 00:10:06.830 "claimed": true, 00:10:06.830 "claim_type": "exclusive_write", 00:10:06.830 "zoned": false, 00:10:06.830 "supported_io_types": { 00:10:06.830 "read": true, 00:10:06.830 "write": true, 00:10:06.830 "unmap": true, 00:10:06.830 "flush": true, 00:10:06.830 "reset": true, 00:10:06.830 "nvme_admin": false, 00:10:06.830 "nvme_io": false, 00:10:06.830 "nvme_io_md": false, 00:10:06.830 "write_zeroes": true, 00:10:06.830 "zcopy": true, 00:10:06.830 "get_zone_info": false, 00:10:06.830 "zone_management": false, 00:10:06.830 "zone_append": false, 00:10:06.830 "compare": false, 00:10:06.830 "compare_and_write": false, 00:10:06.830 "abort": true, 00:10:06.830 "seek_hole": false, 00:10:06.830 "seek_data": false, 00:10:06.830 "copy": true, 00:10:06.830 "nvme_iov_md": false 00:10:06.830 }, 00:10:06.830 "memory_domains": [ 00:10:06.830 { 00:10:06.830 "dma_device_id": "system", 00:10:06.830 "dma_device_type": 1 00:10:06.830 }, 00:10:06.830 { 00:10:06.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.830 "dma_device_type": 2 00:10:06.830 } 00:10:06.830 ], 00:10:06.830 "driver_specific": {} 00:10:06.830 } 00:10:06.830 ] 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.830 "name": "Existed_Raid", 00:10:06.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.830 "strip_size_kb": 64, 00:10:06.830 "state": "configuring", 00:10:06.830 "raid_level": "raid0", 00:10:06.830 "superblock": false, 
00:10:06.830 "num_base_bdevs": 4, 00:10:06.830 "num_base_bdevs_discovered": 3, 00:10:06.830 "num_base_bdevs_operational": 4, 00:10:06.830 "base_bdevs_list": [ 00:10:06.830 { 00:10:06.830 "name": "BaseBdev1", 00:10:06.830 "uuid": "7a5eab6e-dac0-4209-898c-48f8732df482", 00:10:06.830 "is_configured": true, 00:10:06.830 "data_offset": 0, 00:10:06.830 "data_size": 65536 00:10:06.830 }, 00:10:06.830 { 00:10:06.830 "name": null, 00:10:06.830 "uuid": "4430f8d5-1a67-458d-89dd-617bc4adcc6a", 00:10:06.830 "is_configured": false, 00:10:06.830 "data_offset": 0, 00:10:06.830 "data_size": 65536 00:10:06.830 }, 00:10:06.830 { 00:10:06.830 "name": "BaseBdev3", 00:10:06.830 "uuid": "7b80bc6f-3f01-47e9-a38c-bfbda12c069d", 00:10:06.830 "is_configured": true, 00:10:06.830 "data_offset": 0, 00:10:06.830 "data_size": 65536 00:10:06.830 }, 00:10:06.830 { 00:10:06.830 "name": "BaseBdev4", 00:10:06.830 "uuid": "60825174-8eba-4c0b-b8ef-352bbd2be457", 00:10:06.830 "is_configured": true, 00:10:06.830 "data_offset": 0, 00:10:06.830 "data_size": 65536 00:10:06.830 } 00:10:06.830 ] 00:10:06.830 }' 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.830 09:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.090 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:07.350 09:47:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.350 [2024-12-06 09:47:32.413082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.350 09:47:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.350 "name": "Existed_Raid", 00:10:07.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.350 "strip_size_kb": 64, 00:10:07.350 "state": "configuring", 00:10:07.350 "raid_level": "raid0", 00:10:07.350 "superblock": false, 00:10:07.350 "num_base_bdevs": 4, 00:10:07.350 "num_base_bdevs_discovered": 2, 00:10:07.350 "num_base_bdevs_operational": 4, 00:10:07.350 "base_bdevs_list": [ 00:10:07.350 { 00:10:07.350 "name": "BaseBdev1", 00:10:07.350 "uuid": "7a5eab6e-dac0-4209-898c-48f8732df482", 00:10:07.350 "is_configured": true, 00:10:07.350 "data_offset": 0, 00:10:07.350 "data_size": 65536 00:10:07.350 }, 00:10:07.350 { 00:10:07.350 "name": null, 00:10:07.350 "uuid": "4430f8d5-1a67-458d-89dd-617bc4adcc6a", 00:10:07.350 "is_configured": false, 00:10:07.350 "data_offset": 0, 00:10:07.350 "data_size": 65536 00:10:07.350 }, 00:10:07.350 { 00:10:07.350 "name": null, 00:10:07.350 "uuid": "7b80bc6f-3f01-47e9-a38c-bfbda12c069d", 00:10:07.350 "is_configured": false, 00:10:07.350 "data_offset": 0, 00:10:07.350 "data_size": 65536 00:10:07.350 }, 00:10:07.350 { 00:10:07.350 "name": "BaseBdev4", 00:10:07.350 "uuid": "60825174-8eba-4c0b-b8ef-352bbd2be457", 00:10:07.350 "is_configured": true, 00:10:07.350 "data_offset": 0, 00:10:07.350 "data_size": 65536 00:10:07.350 } 00:10:07.350 ] 00:10:07.350 }' 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.350 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.611 09:47:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:07.611 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.611 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.611 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.611 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.870 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:07.870 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.871 [2024-12-06 09:47:32.896255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.871 "name": "Existed_Raid", 00:10:07.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.871 "strip_size_kb": 64, 00:10:07.871 "state": "configuring", 00:10:07.871 "raid_level": "raid0", 00:10:07.871 "superblock": false, 00:10:07.871 "num_base_bdevs": 4, 00:10:07.871 "num_base_bdevs_discovered": 3, 00:10:07.871 "num_base_bdevs_operational": 4, 00:10:07.871 "base_bdevs_list": [ 00:10:07.871 { 00:10:07.871 "name": "BaseBdev1", 00:10:07.871 "uuid": "7a5eab6e-dac0-4209-898c-48f8732df482", 00:10:07.871 "is_configured": true, 00:10:07.871 "data_offset": 0, 00:10:07.871 "data_size": 65536 00:10:07.871 }, 00:10:07.871 { 00:10:07.871 "name": null, 00:10:07.871 "uuid": "4430f8d5-1a67-458d-89dd-617bc4adcc6a", 00:10:07.871 "is_configured": false, 00:10:07.871 "data_offset": 0, 00:10:07.871 "data_size": 65536 00:10:07.871 }, 00:10:07.871 { 00:10:07.871 "name": "BaseBdev3", 00:10:07.871 "uuid": "7b80bc6f-3f01-47e9-a38c-bfbda12c069d", 
00:10:07.871 "is_configured": true, 00:10:07.871 "data_offset": 0, 00:10:07.871 "data_size": 65536 00:10:07.871 }, 00:10:07.871 { 00:10:07.871 "name": "BaseBdev4", 00:10:07.871 "uuid": "60825174-8eba-4c0b-b8ef-352bbd2be457", 00:10:07.871 "is_configured": true, 00:10:07.871 "data_offset": 0, 00:10:07.871 "data_size": 65536 00:10:07.871 } 00:10:07.871 ] 00:10:07.871 }' 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.871 09:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.171 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.171 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:08.171 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.171 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.171 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.171 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:08.171 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:08.171 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.171 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.171 [2024-12-06 09:47:33.403430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.431 09:47:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.431 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.432 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.432 "name": "Existed_Raid", 00:10:08.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.432 "strip_size_kb": 64, 00:10:08.432 "state": "configuring", 00:10:08.432 "raid_level": "raid0", 00:10:08.432 "superblock": false, 00:10:08.432 "num_base_bdevs": 4, 00:10:08.432 "num_base_bdevs_discovered": 2, 00:10:08.432 
"num_base_bdevs_operational": 4, 00:10:08.432 "base_bdevs_list": [ 00:10:08.432 { 00:10:08.432 "name": null, 00:10:08.432 "uuid": "7a5eab6e-dac0-4209-898c-48f8732df482", 00:10:08.432 "is_configured": false, 00:10:08.432 "data_offset": 0, 00:10:08.432 "data_size": 65536 00:10:08.432 }, 00:10:08.432 { 00:10:08.432 "name": null, 00:10:08.432 "uuid": "4430f8d5-1a67-458d-89dd-617bc4adcc6a", 00:10:08.432 "is_configured": false, 00:10:08.432 "data_offset": 0, 00:10:08.432 "data_size": 65536 00:10:08.432 }, 00:10:08.432 { 00:10:08.432 "name": "BaseBdev3", 00:10:08.432 "uuid": "7b80bc6f-3f01-47e9-a38c-bfbda12c069d", 00:10:08.432 "is_configured": true, 00:10:08.432 "data_offset": 0, 00:10:08.432 "data_size": 65536 00:10:08.432 }, 00:10:08.432 { 00:10:08.432 "name": "BaseBdev4", 00:10:08.432 "uuid": "60825174-8eba-4c0b-b8ef-352bbd2be457", 00:10:08.432 "is_configured": true, 00:10:08.432 "data_offset": 0, 00:10:08.432 "data_size": 65536 00:10:08.432 } 00:10:08.432 ] 00:10:08.432 }' 00:10:08.432 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.432 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.692 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.692 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.692 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.692 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:08.692 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.692 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:08.692 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:08.692 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.692 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.952 [2024-12-06 09:47:33.968152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.952 09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.952 
09:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.952 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.952 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.952 "name": "Existed_Raid", 00:10:08.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.953 "strip_size_kb": 64, 00:10:08.953 "state": "configuring", 00:10:08.953 "raid_level": "raid0", 00:10:08.953 "superblock": false, 00:10:08.953 "num_base_bdevs": 4, 00:10:08.953 "num_base_bdevs_discovered": 3, 00:10:08.953 "num_base_bdevs_operational": 4, 00:10:08.953 "base_bdevs_list": [ 00:10:08.953 { 00:10:08.953 "name": null, 00:10:08.953 "uuid": "7a5eab6e-dac0-4209-898c-48f8732df482", 00:10:08.953 "is_configured": false, 00:10:08.953 "data_offset": 0, 00:10:08.953 "data_size": 65536 00:10:08.953 }, 00:10:08.953 { 00:10:08.953 "name": "BaseBdev2", 00:10:08.953 "uuid": "4430f8d5-1a67-458d-89dd-617bc4adcc6a", 00:10:08.953 "is_configured": true, 00:10:08.953 "data_offset": 0, 00:10:08.953 "data_size": 65536 00:10:08.953 }, 00:10:08.953 { 00:10:08.953 "name": "BaseBdev3", 00:10:08.953 "uuid": "7b80bc6f-3f01-47e9-a38c-bfbda12c069d", 00:10:08.953 "is_configured": true, 00:10:08.953 "data_offset": 0, 00:10:08.953 "data_size": 65536 00:10:08.953 }, 00:10:08.953 { 00:10:08.953 "name": "BaseBdev4", 00:10:08.953 "uuid": "60825174-8eba-4c0b-b8ef-352bbd2be457", 00:10:08.953 "is_configured": true, 00:10:08.953 "data_offset": 0, 00:10:08.953 "data_size": 65536 00:10:08.953 } 00:10:08.953 ] 00:10:08.953 }' 00:10:08.953 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.953 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.212 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.212 09:47:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:09.212 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.212 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.212 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.212 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:09.212 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:09.212 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.212 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.212 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7a5eab6e-dac0-4209-898c-48f8732df482 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.473 [2024-12-06 09:47:34.551081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:09.473 [2024-12-06 09:47:34.551247] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:09.473 [2024-12-06 09:47:34.551274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:09.473 [2024-12-06 09:47:34.551579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:09.473 [2024-12-06 09:47:34.551778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:09.473 [2024-12-06 09:47:34.551822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:09.473 [2024-12-06 09:47:34.552099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.473 NewBaseBdev 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:09.473 [ 00:10:09.473 { 00:10:09.473 "name": "NewBaseBdev", 00:10:09.473 "aliases": [ 00:10:09.473 "7a5eab6e-dac0-4209-898c-48f8732df482" 00:10:09.473 ], 00:10:09.473 "product_name": "Malloc disk", 00:10:09.473 "block_size": 512, 00:10:09.473 "num_blocks": 65536, 00:10:09.473 "uuid": "7a5eab6e-dac0-4209-898c-48f8732df482", 00:10:09.473 "assigned_rate_limits": { 00:10:09.473 "rw_ios_per_sec": 0, 00:10:09.473 "rw_mbytes_per_sec": 0, 00:10:09.473 "r_mbytes_per_sec": 0, 00:10:09.473 "w_mbytes_per_sec": 0 00:10:09.473 }, 00:10:09.473 "claimed": true, 00:10:09.473 "claim_type": "exclusive_write", 00:10:09.473 "zoned": false, 00:10:09.473 "supported_io_types": { 00:10:09.473 "read": true, 00:10:09.473 "write": true, 00:10:09.473 "unmap": true, 00:10:09.473 "flush": true, 00:10:09.473 "reset": true, 00:10:09.473 "nvme_admin": false, 00:10:09.473 "nvme_io": false, 00:10:09.473 "nvme_io_md": false, 00:10:09.473 "write_zeroes": true, 00:10:09.473 "zcopy": true, 00:10:09.473 "get_zone_info": false, 00:10:09.473 "zone_management": false, 00:10:09.473 "zone_append": false, 00:10:09.473 "compare": false, 00:10:09.473 "compare_and_write": false, 00:10:09.473 "abort": true, 00:10:09.473 "seek_hole": false, 00:10:09.473 "seek_data": false, 00:10:09.473 "copy": true, 00:10:09.473 "nvme_iov_md": false 00:10:09.473 }, 00:10:09.473 "memory_domains": [ 00:10:09.473 { 00:10:09.473 "dma_device_id": "system", 00:10:09.473 "dma_device_type": 1 00:10:09.473 }, 00:10:09.473 { 00:10:09.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.473 "dma_device_type": 2 00:10:09.473 } 00:10:09.473 ], 00:10:09.473 "driver_specific": {} 00:10:09.473 } 00:10:09.473 ] 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.473 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.473 "name": "Existed_Raid", 00:10:09.473 "uuid": "5e49eaa9-058a-4bde-9980-68f35fc5ef90", 00:10:09.473 "strip_size_kb": 64, 00:10:09.473 "state": "online", 00:10:09.473 "raid_level": "raid0", 00:10:09.473 "superblock": false, 00:10:09.473 "num_base_bdevs": 4, 00:10:09.473 
"num_base_bdevs_discovered": 4, 00:10:09.473 "num_base_bdevs_operational": 4, 00:10:09.473 "base_bdevs_list": [ 00:10:09.473 { 00:10:09.473 "name": "NewBaseBdev", 00:10:09.473 "uuid": "7a5eab6e-dac0-4209-898c-48f8732df482", 00:10:09.473 "is_configured": true, 00:10:09.473 "data_offset": 0, 00:10:09.473 "data_size": 65536 00:10:09.473 }, 00:10:09.473 { 00:10:09.473 "name": "BaseBdev2", 00:10:09.473 "uuid": "4430f8d5-1a67-458d-89dd-617bc4adcc6a", 00:10:09.473 "is_configured": true, 00:10:09.473 "data_offset": 0, 00:10:09.473 "data_size": 65536 00:10:09.473 }, 00:10:09.473 { 00:10:09.473 "name": "BaseBdev3", 00:10:09.473 "uuid": "7b80bc6f-3f01-47e9-a38c-bfbda12c069d", 00:10:09.473 "is_configured": true, 00:10:09.473 "data_offset": 0, 00:10:09.473 "data_size": 65536 00:10:09.473 }, 00:10:09.473 { 00:10:09.473 "name": "BaseBdev4", 00:10:09.473 "uuid": "60825174-8eba-4c0b-b8ef-352bbd2be457", 00:10:09.473 "is_configured": true, 00:10:09.473 "data_offset": 0, 00:10:09.473 "data_size": 65536 00:10:09.474 } 00:10:09.474 ] 00:10:09.474 }' 00:10:09.474 09:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.474 09:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.040 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.040 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.040 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.040 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.040 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.040 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.040 09:47:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.040 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.040 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.040 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.040 [2024-12-06 09:47:35.038633] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.040 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.040 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.040 "name": "Existed_Raid", 00:10:10.040 "aliases": [ 00:10:10.040 "5e49eaa9-058a-4bde-9980-68f35fc5ef90" 00:10:10.040 ], 00:10:10.040 "product_name": "Raid Volume", 00:10:10.040 "block_size": 512, 00:10:10.040 "num_blocks": 262144, 00:10:10.040 "uuid": "5e49eaa9-058a-4bde-9980-68f35fc5ef90", 00:10:10.040 "assigned_rate_limits": { 00:10:10.040 "rw_ios_per_sec": 0, 00:10:10.040 "rw_mbytes_per_sec": 0, 00:10:10.040 "r_mbytes_per_sec": 0, 00:10:10.040 "w_mbytes_per_sec": 0 00:10:10.040 }, 00:10:10.040 "claimed": false, 00:10:10.040 "zoned": false, 00:10:10.040 "supported_io_types": { 00:10:10.040 "read": true, 00:10:10.040 "write": true, 00:10:10.040 "unmap": true, 00:10:10.040 "flush": true, 00:10:10.040 "reset": true, 00:10:10.040 "nvme_admin": false, 00:10:10.040 "nvme_io": false, 00:10:10.040 "nvme_io_md": false, 00:10:10.040 "write_zeroes": true, 00:10:10.040 "zcopy": false, 00:10:10.040 "get_zone_info": false, 00:10:10.040 "zone_management": false, 00:10:10.040 "zone_append": false, 00:10:10.040 "compare": false, 00:10:10.040 "compare_and_write": false, 00:10:10.040 "abort": false, 00:10:10.040 "seek_hole": false, 00:10:10.040 "seek_data": false, 00:10:10.040 "copy": false, 00:10:10.040 "nvme_iov_md": false 00:10:10.040 }, 00:10:10.040 "memory_domains": [ 
00:10:10.040 { 00:10:10.040 "dma_device_id": "system", 00:10:10.040 "dma_device_type": 1 00:10:10.040 }, 00:10:10.040 { 00:10:10.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.040 "dma_device_type": 2 00:10:10.040 }, 00:10:10.040 { 00:10:10.040 "dma_device_id": "system", 00:10:10.040 "dma_device_type": 1 00:10:10.040 }, 00:10:10.040 { 00:10:10.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.040 "dma_device_type": 2 00:10:10.040 }, 00:10:10.040 { 00:10:10.040 "dma_device_id": "system", 00:10:10.040 "dma_device_type": 1 00:10:10.040 }, 00:10:10.041 { 00:10:10.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.041 "dma_device_type": 2 00:10:10.041 }, 00:10:10.041 { 00:10:10.041 "dma_device_id": "system", 00:10:10.041 "dma_device_type": 1 00:10:10.041 }, 00:10:10.041 { 00:10:10.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.041 "dma_device_type": 2 00:10:10.041 } 00:10:10.041 ], 00:10:10.041 "driver_specific": { 00:10:10.041 "raid": { 00:10:10.041 "uuid": "5e49eaa9-058a-4bde-9980-68f35fc5ef90", 00:10:10.041 "strip_size_kb": 64, 00:10:10.041 "state": "online", 00:10:10.041 "raid_level": "raid0", 00:10:10.041 "superblock": false, 00:10:10.041 "num_base_bdevs": 4, 00:10:10.041 "num_base_bdevs_discovered": 4, 00:10:10.041 "num_base_bdevs_operational": 4, 00:10:10.041 "base_bdevs_list": [ 00:10:10.041 { 00:10:10.041 "name": "NewBaseBdev", 00:10:10.041 "uuid": "7a5eab6e-dac0-4209-898c-48f8732df482", 00:10:10.041 "is_configured": true, 00:10:10.041 "data_offset": 0, 00:10:10.041 "data_size": 65536 00:10:10.041 }, 00:10:10.041 { 00:10:10.041 "name": "BaseBdev2", 00:10:10.041 "uuid": "4430f8d5-1a67-458d-89dd-617bc4adcc6a", 00:10:10.041 "is_configured": true, 00:10:10.041 "data_offset": 0, 00:10:10.041 "data_size": 65536 00:10:10.041 }, 00:10:10.041 { 00:10:10.041 "name": "BaseBdev3", 00:10:10.041 "uuid": "7b80bc6f-3f01-47e9-a38c-bfbda12c069d", 00:10:10.041 "is_configured": true, 00:10:10.041 "data_offset": 0, 00:10:10.041 "data_size": 65536 
00:10:10.041 }, 00:10:10.041 { 00:10:10.041 "name": "BaseBdev4", 00:10:10.041 "uuid": "60825174-8eba-4c0b-b8ef-352bbd2be457", 00:10:10.041 "is_configured": true, 00:10:10.041 "data_offset": 0, 00:10:10.041 "data_size": 65536 00:10:10.041 } 00:10:10.041 ] 00:10:10.041 } 00:10:10.041 } 00:10:10.041 }' 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:10.041 BaseBdev2 00:10:10.041 BaseBdev3 00:10:10.041 BaseBdev4' 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.041 
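The trace above hinges on a jq filter (`bdev_raid.sh@188`) that collects the names of the configured base bdevs out of the raid volume's `driver_specific` section. A standalone sketch of that filter, run against a hand-written stand-in for the `bdev_get_bdevs -b Existed_Raid` output (the JSON below is illustrative, not captured RPC output):

```shell
# Stand-in for the Raid Volume JSON returned by `rpc.py bdev_get_bdevs -b
# Existed_Raid`; only the fields the filter touches are reproduced here.
raid_bdev_info='{
  "driver_specific": { "raid": { "base_bdevs_list": [
    { "name": "NewBaseBdev", "is_configured": true },
    { "name": "BaseBdev2",   "is_configured": true },
    { "name": "BaseBdev3",   "is_configured": true },
    { "name": "BaseBdev4",   "is_configured": true }
  ] } } }'

# Same filter the trace uses: keep only configured members, emit their names,
# one per line.
base_bdev_names=$(jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' \
  <<< "$raid_bdev_info")

echo "$base_bdev_names"
```

The test then loops over `$base_bdev_names` and compares each member's `block_size`/`md_size` tuple against the raid volume's, which is what the repeated `cmp_base_bdev='512 '` checks in the trace are doing.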
09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.041 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.300 [2024-12-06 09:47:35.377716] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.300 [2024-12-06 09:47:35.377786] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.300 [2024-12-06 09:47:35.377890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.300 [2024-12-06 09:47:35.377992] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.300 [2024-12-06 09:47:35.378049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69319 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69319 ']' 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69319 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69319 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69319' 00:10:10.300 killing process with pid 69319 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69319 00:10:10.300 [2024-12-06 09:47:35.425498] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.300 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69319 00:10:10.558 [2024-12-06 09:47:35.826812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.933 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:11.933 00:10:11.933 real 0m11.654s 00:10:11.933 user 0m18.556s 00:10:11.933 sys 0m2.011s 00:10:11.933 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.933 ************************************ 00:10:11.933 END TEST raid_state_function_test 00:10:11.933 ************************************ 00:10:11.933 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.933 09:47:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:11.933 09:47:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:11.933 09:47:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.933 09:47:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.933 ************************************ 00:10:11.933 START TEST raid_state_function_test_sb 00:10:11.933 ************************************ 00:10:11.933 09:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:11.933 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:11.933 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:11.933 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:11.933 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:11.934 
09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69993 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69993' 00:10:11.934 Process raid pid: 69993 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69993 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69993 ']' 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.934 09:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.934 [2024-12-06 09:47:37.123573] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:10:11.934 [2024-12-06 09:47:37.123791] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.192 [2024-12-06 09:47:37.297016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.192 [2024-12-06 09:47:37.411761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.450 [2024-12-06 09:47:37.623153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.450 [2024-12-06 09:47:37.623188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.710 09:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.710 09:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:12.710 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:12.710 09:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.710 09:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.970 [2024-12-06 09:47:37.982773] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.970 [2024-12-06 09:47:37.982913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.970 [2024-12-06 09:47:37.982948] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:12.970 [2024-12-06 09:47:37.982973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:12.970 [2024-12-06 09:47:37.982992] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:12.970 [2024-12-06 09:47:37.983013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:12.970 [2024-12-06 09:47:37.983031] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:12.970 [2024-12-06 09:47:37.983068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:12.970 09:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.970 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.970 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.970 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.970 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.970 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.970 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.971 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.971 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.971 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.971 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.971 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.971 09:47:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.971 09:47:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.971 09:47:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.971 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.971 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.971 "name": "Existed_Raid", 00:10:12.971 "uuid": "db4bce32-8cd1-4bb7-ae22-587161fea693", 00:10:12.971 "strip_size_kb": 64, 00:10:12.971 "state": "configuring", 00:10:12.971 "raid_level": "raid0", 00:10:12.971 "superblock": true, 00:10:12.971 "num_base_bdevs": 4, 00:10:12.971 "num_base_bdevs_discovered": 0, 00:10:12.971 "num_base_bdevs_operational": 4, 00:10:12.971 "base_bdevs_list": [ 00:10:12.971 { 00:10:12.971 "name": "BaseBdev1", 00:10:12.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.971 "is_configured": false, 00:10:12.971 "data_offset": 0, 00:10:12.971 "data_size": 0 00:10:12.971 }, 00:10:12.971 { 00:10:12.971 "name": "BaseBdev2", 00:10:12.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.971 "is_configured": false, 00:10:12.971 "data_offset": 0, 00:10:12.971 "data_size": 0 00:10:12.971 }, 00:10:12.971 { 00:10:12.971 "name": "BaseBdev3", 00:10:12.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.971 "is_configured": false, 00:10:12.971 "data_offset": 0, 00:10:12.971 "data_size": 0 00:10:12.971 }, 00:10:12.971 { 00:10:12.971 "name": "BaseBdev4", 00:10:12.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.971 "is_configured": false, 00:10:12.971 "data_offset": 0, 00:10:12.971 "data_size": 0 00:10:12.971 } 00:10:12.971 ] 00:10:12.971 }' 00:10:12.971 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.971 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.230 09:47:38 
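The `verify_raid_bdev_state` helper seen in the trace above selects the `Existed_Raid` entry out of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares its fields against the expected values passed in (`configuring raid0 64 4`). The helper itself is shell + jq; the following is only an illustrative Python re-expression of that comparison, with the JSON trimmed to the compared fields and the values copied from the `raid_bdev_info` dump above:

```python
import json

# raid_bdev_info as reported by `rpc.py bdev_raid_get_bdevs all`, trimmed to the
# fields the test compares (values copied from the log entry above)
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    """Sketch of the checks the shell helper performs on the jq-selected entry."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational

# verify_raid_bdev_state Existed_Raid configuring raid0 64 4
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 4)
```

At this point in the run no base bdev exists yet, which is why `num_base_bdevs_discovered` is 0 while the raid stays in the `configuring` state.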
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.230 [2024-12-06 09:47:38.382009] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.230 [2024-12-06 09:47:38.382110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.230 [2024-12-06 09:47:38.393991] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.230 [2024-12-06 09:47:38.394068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.230 [2024-12-06 09:47:38.394094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.230 [2024-12-06 09:47:38.394116] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.230 [2024-12-06 09:47:38.394134] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.230 [2024-12-06 09:47:38.394164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.230 [2024-12-06 09:47:38.394198] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:13.230 [2024-12-06 09:47:38.394219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.230 [2024-12-06 09:47:38.442409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.230 BaseBdev1 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.230 [ 00:10:13.230 { 00:10:13.230 "name": "BaseBdev1", 00:10:13.230 "aliases": [ 00:10:13.230 "0f033550-2d3e-41d4-9195-2e1ded56dbc9" 00:10:13.230 ], 00:10:13.230 "product_name": "Malloc disk", 00:10:13.230 "block_size": 512, 00:10:13.230 "num_blocks": 65536, 00:10:13.230 "uuid": "0f033550-2d3e-41d4-9195-2e1ded56dbc9", 00:10:13.230 "assigned_rate_limits": { 00:10:13.230 "rw_ios_per_sec": 0, 00:10:13.230 "rw_mbytes_per_sec": 0, 00:10:13.230 "r_mbytes_per_sec": 0, 00:10:13.230 "w_mbytes_per_sec": 0 00:10:13.230 }, 00:10:13.230 "claimed": true, 00:10:13.230 "claim_type": "exclusive_write", 00:10:13.230 "zoned": false, 00:10:13.230 "supported_io_types": { 00:10:13.230 "read": true, 00:10:13.230 "write": true, 00:10:13.230 "unmap": true, 00:10:13.230 "flush": true, 00:10:13.230 "reset": true, 00:10:13.230 "nvme_admin": false, 00:10:13.230 "nvme_io": false, 00:10:13.230 "nvme_io_md": false, 00:10:13.230 "write_zeroes": true, 00:10:13.230 "zcopy": true, 00:10:13.230 "get_zone_info": false, 00:10:13.230 "zone_management": false, 00:10:13.230 "zone_append": false, 00:10:13.230 "compare": false, 00:10:13.230 "compare_and_write": false, 00:10:13.230 "abort": true, 00:10:13.230 "seek_hole": false, 00:10:13.230 "seek_data": false, 00:10:13.230 "copy": true, 00:10:13.230 "nvme_iov_md": false 00:10:13.230 }, 00:10:13.230 "memory_domains": [ 00:10:13.230 { 00:10:13.230 "dma_device_id": "system", 00:10:13.230 "dma_device_type": 1 00:10:13.230 }, 00:10:13.230 { 00:10:13.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.230 "dma_device_type": 2 00:10:13.230 } 
00:10:13.230 ], 00:10:13.230 "driver_specific": {} 00:10:13.230 } 00:10:13.230 ] 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.230 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.489 09:47:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.489 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.489 "name": "Existed_Raid", 00:10:13.489 "uuid": "0d424f01-4154-4728-85a8-cb1d32a69a58", 00:10:13.489 "strip_size_kb": 64, 00:10:13.489 "state": "configuring", 00:10:13.489 "raid_level": "raid0", 00:10:13.489 "superblock": true, 00:10:13.489 "num_base_bdevs": 4, 00:10:13.489 "num_base_bdevs_discovered": 1, 00:10:13.489 "num_base_bdevs_operational": 4, 00:10:13.489 "base_bdevs_list": [ 00:10:13.489 { 00:10:13.489 "name": "BaseBdev1", 00:10:13.489 "uuid": "0f033550-2d3e-41d4-9195-2e1ded56dbc9", 00:10:13.489 "is_configured": true, 00:10:13.489 "data_offset": 2048, 00:10:13.489 "data_size": 63488 00:10:13.489 }, 00:10:13.489 { 00:10:13.489 "name": "BaseBdev2", 00:10:13.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.489 "is_configured": false, 00:10:13.489 "data_offset": 0, 00:10:13.489 "data_size": 0 00:10:13.489 }, 00:10:13.489 { 00:10:13.489 "name": "BaseBdev3", 00:10:13.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.489 "is_configured": false, 00:10:13.489 "data_offset": 0, 00:10:13.490 "data_size": 0 00:10:13.490 }, 00:10:13.490 { 00:10:13.490 "name": "BaseBdev4", 00:10:13.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.490 "is_configured": false, 00:10:13.490 "data_offset": 0, 00:10:13.490 "data_size": 0 00:10:13.490 } 00:10:13.490 ] 00:10:13.490 }' 00:10:13.490 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.490 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.749 09:47:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.749 [2024-12-06 09:47:38.893702] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.749 [2024-12-06 09:47:38.893801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.749 [2024-12-06 09:47:38.905734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.749 [2024-12-06 09:47:38.907685] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.749 [2024-12-06 09:47:38.907773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.749 [2024-12-06 09:47:38.907804] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.749 [2024-12-06 09:47:38.907830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.749 [2024-12-06 09:47:38.907849] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:13.749 [2024-12-06 09:47:38.907870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.749 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:13.749 "name": "Existed_Raid", 00:10:13.749 "uuid": "d642a28e-d7f6-41f3-bc28-70acfec1f7b0", 00:10:13.749 "strip_size_kb": 64, 00:10:13.749 "state": "configuring", 00:10:13.749 "raid_level": "raid0", 00:10:13.749 "superblock": true, 00:10:13.749 "num_base_bdevs": 4, 00:10:13.749 "num_base_bdevs_discovered": 1, 00:10:13.749 "num_base_bdevs_operational": 4, 00:10:13.749 "base_bdevs_list": [ 00:10:13.749 { 00:10:13.749 "name": "BaseBdev1", 00:10:13.749 "uuid": "0f033550-2d3e-41d4-9195-2e1ded56dbc9", 00:10:13.749 "is_configured": true, 00:10:13.749 "data_offset": 2048, 00:10:13.749 "data_size": 63488 00:10:13.749 }, 00:10:13.749 { 00:10:13.749 "name": "BaseBdev2", 00:10:13.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.749 "is_configured": false, 00:10:13.749 "data_offset": 0, 00:10:13.749 "data_size": 0 00:10:13.749 }, 00:10:13.749 { 00:10:13.749 "name": "BaseBdev3", 00:10:13.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.749 "is_configured": false, 00:10:13.750 "data_offset": 0, 00:10:13.750 "data_size": 0 00:10:13.750 }, 00:10:13.750 { 00:10:13.750 "name": "BaseBdev4", 00:10:13.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.750 "is_configured": false, 00:10:13.750 "data_offset": 0, 00:10:13.750 "data_size": 0 00:10:13.750 } 00:10:13.750 ] 00:10:13.750 }' 00:10:13.750 09:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.750 09:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.363 [2024-12-06 09:47:39.367392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:14.363 BaseBdev2 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.363 [ 00:10:14.363 { 00:10:14.363 "name": "BaseBdev2", 00:10:14.363 "aliases": [ 00:10:14.363 "a69764fe-a077-4714-8516-1e4091962861" 00:10:14.363 ], 00:10:14.363 "product_name": "Malloc disk", 00:10:14.363 "block_size": 512, 00:10:14.363 "num_blocks": 65536, 00:10:14.363 "uuid": "a69764fe-a077-4714-8516-1e4091962861", 
00:10:14.363 "assigned_rate_limits": { 00:10:14.363 "rw_ios_per_sec": 0, 00:10:14.363 "rw_mbytes_per_sec": 0, 00:10:14.363 "r_mbytes_per_sec": 0, 00:10:14.363 "w_mbytes_per_sec": 0 00:10:14.363 }, 00:10:14.363 "claimed": true, 00:10:14.363 "claim_type": "exclusive_write", 00:10:14.363 "zoned": false, 00:10:14.363 "supported_io_types": { 00:10:14.363 "read": true, 00:10:14.363 "write": true, 00:10:14.363 "unmap": true, 00:10:14.363 "flush": true, 00:10:14.363 "reset": true, 00:10:14.363 "nvme_admin": false, 00:10:14.363 "nvme_io": false, 00:10:14.363 "nvme_io_md": false, 00:10:14.363 "write_zeroes": true, 00:10:14.363 "zcopy": true, 00:10:14.363 "get_zone_info": false, 00:10:14.363 "zone_management": false, 00:10:14.363 "zone_append": false, 00:10:14.363 "compare": false, 00:10:14.363 "compare_and_write": false, 00:10:14.363 "abort": true, 00:10:14.363 "seek_hole": false, 00:10:14.363 "seek_data": false, 00:10:14.363 "copy": true, 00:10:14.363 "nvme_iov_md": false 00:10:14.363 }, 00:10:14.363 "memory_domains": [ 00:10:14.363 { 00:10:14.363 "dma_device_id": "system", 00:10:14.363 "dma_device_type": 1 00:10:14.363 }, 00:10:14.363 { 00:10:14.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.363 "dma_device_type": 2 00:10:14.363 } 00:10:14.363 ], 00:10:14.363 "driver_specific": {} 00:10:14.363 } 00:10:14.363 ] 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.363 "name": "Existed_Raid", 00:10:14.363 "uuid": "d642a28e-d7f6-41f3-bc28-70acfec1f7b0", 00:10:14.363 "strip_size_kb": 64, 00:10:14.363 "state": "configuring", 00:10:14.363 "raid_level": "raid0", 00:10:14.363 "superblock": true, 00:10:14.363 "num_base_bdevs": 4, 00:10:14.363 "num_base_bdevs_discovered": 2, 00:10:14.363 
"num_base_bdevs_operational": 4, 00:10:14.363 "base_bdevs_list": [ 00:10:14.363 { 00:10:14.363 "name": "BaseBdev1", 00:10:14.363 "uuid": "0f033550-2d3e-41d4-9195-2e1ded56dbc9", 00:10:14.363 "is_configured": true, 00:10:14.363 "data_offset": 2048, 00:10:14.363 "data_size": 63488 00:10:14.363 }, 00:10:14.363 { 00:10:14.363 "name": "BaseBdev2", 00:10:14.363 "uuid": "a69764fe-a077-4714-8516-1e4091962861", 00:10:14.363 "is_configured": true, 00:10:14.363 "data_offset": 2048, 00:10:14.363 "data_size": 63488 00:10:14.363 }, 00:10:14.363 { 00:10:14.363 "name": "BaseBdev3", 00:10:14.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.363 "is_configured": false, 00:10:14.363 "data_offset": 0, 00:10:14.363 "data_size": 0 00:10:14.363 }, 00:10:14.363 { 00:10:14.363 "name": "BaseBdev4", 00:10:14.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.363 "is_configured": false, 00:10:14.363 "data_offset": 0, 00:10:14.363 "data_size": 0 00:10:14.363 } 00:10:14.363 ] 00:10:14.363 }' 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.363 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.624 [2024-12-06 09:47:39.830297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.624 BaseBdev3 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.624 [ 00:10:14.624 { 00:10:14.624 "name": "BaseBdev3", 00:10:14.624 "aliases": [ 00:10:14.624 "ec47d12b-8453-4386-826d-ad0a79bf3925" 00:10:14.624 ], 00:10:14.624 "product_name": "Malloc disk", 00:10:14.624 "block_size": 512, 00:10:14.624 "num_blocks": 65536, 00:10:14.624 "uuid": "ec47d12b-8453-4386-826d-ad0a79bf3925", 00:10:14.624 "assigned_rate_limits": { 00:10:14.624 "rw_ios_per_sec": 0, 00:10:14.624 "rw_mbytes_per_sec": 0, 00:10:14.624 "r_mbytes_per_sec": 0, 00:10:14.624 "w_mbytes_per_sec": 0 00:10:14.624 }, 00:10:14.624 "claimed": true, 00:10:14.624 "claim_type": "exclusive_write", 00:10:14.624 "zoned": false, 00:10:14.624 "supported_io_types": { 
00:10:14.624 "read": true, 00:10:14.624 "write": true, 00:10:14.624 "unmap": true, 00:10:14.624 "flush": true, 00:10:14.624 "reset": true, 00:10:14.624 "nvme_admin": false, 00:10:14.624 "nvme_io": false, 00:10:14.624 "nvme_io_md": false, 00:10:14.624 "write_zeroes": true, 00:10:14.624 "zcopy": true, 00:10:14.624 "get_zone_info": false, 00:10:14.624 "zone_management": false, 00:10:14.624 "zone_append": false, 00:10:14.624 "compare": false, 00:10:14.624 "compare_and_write": false, 00:10:14.624 "abort": true, 00:10:14.624 "seek_hole": false, 00:10:14.624 "seek_data": false, 00:10:14.624 "copy": true, 00:10:14.624 "nvme_iov_md": false 00:10:14.624 }, 00:10:14.624 "memory_domains": [ 00:10:14.624 { 00:10:14.624 "dma_device_id": "system", 00:10:14.624 "dma_device_type": 1 00:10:14.624 }, 00:10:14.624 { 00:10:14.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.624 "dma_device_type": 2 00:10:14.624 } 00:10:14.624 ], 00:10:14.624 "driver_specific": {} 00:10:14.624 } 00:10:14.624 ] 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.624 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.885 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.885 "name": "Existed_Raid", 00:10:14.885 "uuid": "d642a28e-d7f6-41f3-bc28-70acfec1f7b0", 00:10:14.885 "strip_size_kb": 64, 00:10:14.885 "state": "configuring", 00:10:14.885 "raid_level": "raid0", 00:10:14.885 "superblock": true, 00:10:14.885 "num_base_bdevs": 4, 00:10:14.885 "num_base_bdevs_discovered": 3, 00:10:14.885 "num_base_bdevs_operational": 4, 00:10:14.885 "base_bdevs_list": [ 00:10:14.885 { 00:10:14.885 "name": "BaseBdev1", 00:10:14.885 "uuid": "0f033550-2d3e-41d4-9195-2e1ded56dbc9", 00:10:14.885 "is_configured": true, 00:10:14.885 "data_offset": 2048, 00:10:14.885 "data_size": 63488 00:10:14.885 }, 00:10:14.885 { 00:10:14.885 "name": "BaseBdev2", 00:10:14.885 
"uuid": "a69764fe-a077-4714-8516-1e4091962861", 00:10:14.885 "is_configured": true, 00:10:14.885 "data_offset": 2048, 00:10:14.885 "data_size": 63488 00:10:14.885 }, 00:10:14.885 { 00:10:14.885 "name": "BaseBdev3", 00:10:14.885 "uuid": "ec47d12b-8453-4386-826d-ad0a79bf3925", 00:10:14.885 "is_configured": true, 00:10:14.885 "data_offset": 2048, 00:10:14.885 "data_size": 63488 00:10:14.885 }, 00:10:14.885 { 00:10:14.885 "name": "BaseBdev4", 00:10:14.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.885 "is_configured": false, 00:10:14.885 "data_offset": 0, 00:10:14.885 "data_size": 0 00:10:14.885 } 00:10:14.885 ] 00:10:14.885 }' 00:10:14.885 09:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.885 09:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.146 [2024-12-06 09:47:40.352125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:15.146 [2024-12-06 09:47:40.352553] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:15.146 [2024-12-06 09:47:40.352612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:15.146 [2024-12-06 09:47:40.352913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:15.146 BaseBdev4 00:10:15.146 [2024-12-06 09:47:40.353108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:15.146 [2024-12-06 09:47:40.353122] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:15.146 [2024-12-06 09:47:40.353283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.146 [ 00:10:15.146 { 00:10:15.146 "name": "BaseBdev4", 00:10:15.146 "aliases": [ 00:10:15.146 "b8a864ca-fc8a-4a8d-a4ea-bf3ba9a05f0d" 00:10:15.146 ], 00:10:15.146 "product_name": "Malloc disk", 00:10:15.146 "block_size": 512, 00:10:15.146 
"num_blocks": 65536, 00:10:15.146 "uuid": "b8a864ca-fc8a-4a8d-a4ea-bf3ba9a05f0d", 00:10:15.146 "assigned_rate_limits": { 00:10:15.146 "rw_ios_per_sec": 0, 00:10:15.146 "rw_mbytes_per_sec": 0, 00:10:15.146 "r_mbytes_per_sec": 0, 00:10:15.146 "w_mbytes_per_sec": 0 00:10:15.146 }, 00:10:15.146 "claimed": true, 00:10:15.146 "claim_type": "exclusive_write", 00:10:15.146 "zoned": false, 00:10:15.146 "supported_io_types": { 00:10:15.146 "read": true, 00:10:15.146 "write": true, 00:10:15.146 "unmap": true, 00:10:15.146 "flush": true, 00:10:15.146 "reset": true, 00:10:15.146 "nvme_admin": false, 00:10:15.146 "nvme_io": false, 00:10:15.146 "nvme_io_md": false, 00:10:15.146 "write_zeroes": true, 00:10:15.146 "zcopy": true, 00:10:15.146 "get_zone_info": false, 00:10:15.146 "zone_management": false, 00:10:15.146 "zone_append": false, 00:10:15.146 "compare": false, 00:10:15.146 "compare_and_write": false, 00:10:15.146 "abort": true, 00:10:15.146 "seek_hole": false, 00:10:15.146 "seek_data": false, 00:10:15.146 "copy": true, 00:10:15.146 "nvme_iov_md": false 00:10:15.146 }, 00:10:15.146 "memory_domains": [ 00:10:15.146 { 00:10:15.146 "dma_device_id": "system", 00:10:15.146 "dma_device_type": 1 00:10:15.146 }, 00:10:15.146 { 00:10:15.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.146 "dma_device_type": 2 00:10:15.146 } 00:10:15.146 ], 00:10:15.146 "driver_specific": {} 00:10:15.146 } 00:10:15.146 ] 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.146 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.406 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.406 "name": "Existed_Raid", 00:10:15.406 "uuid": "d642a28e-d7f6-41f3-bc28-70acfec1f7b0", 00:10:15.406 "strip_size_kb": 64, 00:10:15.406 "state": "online", 00:10:15.406 "raid_level": "raid0", 00:10:15.406 "superblock": true, 00:10:15.406 "num_base_bdevs": 4, 
00:10:15.406 "num_base_bdevs_discovered": 4, 00:10:15.406 "num_base_bdevs_operational": 4, 00:10:15.406 "base_bdevs_list": [ 00:10:15.406 { 00:10:15.406 "name": "BaseBdev1", 00:10:15.406 "uuid": "0f033550-2d3e-41d4-9195-2e1ded56dbc9", 00:10:15.406 "is_configured": true, 00:10:15.406 "data_offset": 2048, 00:10:15.406 "data_size": 63488 00:10:15.406 }, 00:10:15.406 { 00:10:15.406 "name": "BaseBdev2", 00:10:15.406 "uuid": "a69764fe-a077-4714-8516-1e4091962861", 00:10:15.406 "is_configured": true, 00:10:15.406 "data_offset": 2048, 00:10:15.406 "data_size": 63488 00:10:15.406 }, 00:10:15.406 { 00:10:15.406 "name": "BaseBdev3", 00:10:15.406 "uuid": "ec47d12b-8453-4386-826d-ad0a79bf3925", 00:10:15.406 "is_configured": true, 00:10:15.406 "data_offset": 2048, 00:10:15.406 "data_size": 63488 00:10:15.406 }, 00:10:15.406 { 00:10:15.406 "name": "BaseBdev4", 00:10:15.406 "uuid": "b8a864ca-fc8a-4a8d-a4ea-bf3ba9a05f0d", 00:10:15.406 "is_configured": true, 00:10:15.406 "data_offset": 2048, 00:10:15.406 "data_size": 63488 00:10:15.406 } 00:10:15.406 ] 00:10:15.406 }' 00:10:15.406 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.406 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.665 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:15.665 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:15.665 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.665 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.665 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.665 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.665 
09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:15.665 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.665 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.665 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.665 [2024-12-06 09:47:40.795838] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.665 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.665 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.665 "name": "Existed_Raid", 00:10:15.665 "aliases": [ 00:10:15.665 "d642a28e-d7f6-41f3-bc28-70acfec1f7b0" 00:10:15.665 ], 00:10:15.665 "product_name": "Raid Volume", 00:10:15.665 "block_size": 512, 00:10:15.665 "num_blocks": 253952, 00:10:15.665 "uuid": "d642a28e-d7f6-41f3-bc28-70acfec1f7b0", 00:10:15.665 "assigned_rate_limits": { 00:10:15.665 "rw_ios_per_sec": 0, 00:10:15.665 "rw_mbytes_per_sec": 0, 00:10:15.665 "r_mbytes_per_sec": 0, 00:10:15.665 "w_mbytes_per_sec": 0 00:10:15.665 }, 00:10:15.665 "claimed": false, 00:10:15.665 "zoned": false, 00:10:15.665 "supported_io_types": { 00:10:15.665 "read": true, 00:10:15.665 "write": true, 00:10:15.666 "unmap": true, 00:10:15.666 "flush": true, 00:10:15.666 "reset": true, 00:10:15.666 "nvme_admin": false, 00:10:15.666 "nvme_io": false, 00:10:15.666 "nvme_io_md": false, 00:10:15.666 "write_zeroes": true, 00:10:15.666 "zcopy": false, 00:10:15.666 "get_zone_info": false, 00:10:15.666 "zone_management": false, 00:10:15.666 "zone_append": false, 00:10:15.666 "compare": false, 00:10:15.666 "compare_and_write": false, 00:10:15.666 "abort": false, 00:10:15.666 "seek_hole": false, 00:10:15.666 "seek_data": false, 00:10:15.666 "copy": false, 00:10:15.666 
"nvme_iov_md": false 00:10:15.666 }, 00:10:15.666 "memory_domains": [ 00:10:15.666 { 00:10:15.666 "dma_device_id": "system", 00:10:15.666 "dma_device_type": 1 00:10:15.666 }, 00:10:15.666 { 00:10:15.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.666 "dma_device_type": 2 00:10:15.666 }, 00:10:15.666 { 00:10:15.666 "dma_device_id": "system", 00:10:15.666 "dma_device_type": 1 00:10:15.666 }, 00:10:15.666 { 00:10:15.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.666 "dma_device_type": 2 00:10:15.666 }, 00:10:15.666 { 00:10:15.666 "dma_device_id": "system", 00:10:15.666 "dma_device_type": 1 00:10:15.666 }, 00:10:15.666 { 00:10:15.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.666 "dma_device_type": 2 00:10:15.666 }, 00:10:15.666 { 00:10:15.666 "dma_device_id": "system", 00:10:15.666 "dma_device_type": 1 00:10:15.666 }, 00:10:15.666 { 00:10:15.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.666 "dma_device_type": 2 00:10:15.666 } 00:10:15.666 ], 00:10:15.666 "driver_specific": { 00:10:15.666 "raid": { 00:10:15.666 "uuid": "d642a28e-d7f6-41f3-bc28-70acfec1f7b0", 00:10:15.666 "strip_size_kb": 64, 00:10:15.666 "state": "online", 00:10:15.666 "raid_level": "raid0", 00:10:15.666 "superblock": true, 00:10:15.666 "num_base_bdevs": 4, 00:10:15.666 "num_base_bdevs_discovered": 4, 00:10:15.666 "num_base_bdevs_operational": 4, 00:10:15.666 "base_bdevs_list": [ 00:10:15.666 { 00:10:15.666 "name": "BaseBdev1", 00:10:15.666 "uuid": "0f033550-2d3e-41d4-9195-2e1ded56dbc9", 00:10:15.666 "is_configured": true, 00:10:15.666 "data_offset": 2048, 00:10:15.666 "data_size": 63488 00:10:15.666 }, 00:10:15.666 { 00:10:15.666 "name": "BaseBdev2", 00:10:15.666 "uuid": "a69764fe-a077-4714-8516-1e4091962861", 00:10:15.666 "is_configured": true, 00:10:15.666 "data_offset": 2048, 00:10:15.666 "data_size": 63488 00:10:15.666 }, 00:10:15.666 { 00:10:15.666 "name": "BaseBdev3", 00:10:15.666 "uuid": "ec47d12b-8453-4386-826d-ad0a79bf3925", 00:10:15.666 "is_configured": true, 
00:10:15.666 "data_offset": 2048, 00:10:15.666 "data_size": 63488 00:10:15.666 }, 00:10:15.666 { 00:10:15.666 "name": "BaseBdev4", 00:10:15.666 "uuid": "b8a864ca-fc8a-4a8d-a4ea-bf3ba9a05f0d", 00:10:15.666 "is_configured": true, 00:10:15.666 "data_offset": 2048, 00:10:15.666 "data_size": 63488 00:10:15.666 } 00:10:15.666 ] 00:10:15.666 } 00:10:15.666 } 00:10:15.666 }' 00:10:15.666 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.666 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:15.666 BaseBdev2 00:10:15.666 BaseBdev3 00:10:15.666 BaseBdev4' 00:10:15.666 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.666 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.666 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.666 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:15.666 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.666 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.666 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.927 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.927 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.927 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.927 09:47:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.927 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.927 09:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:15.927 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.927 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.927 09:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.927 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.928 [2024-12-06 09:47:41.094994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.928 [2024-12-06 09:47:41.095071] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.928 [2024-12-06 09:47:41.095174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.928 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.188 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.188 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.188 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.188 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.188 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
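The `has_redundancy raid0` call above returns 1, so the test expects the array to go `offline` (not degraded) once `BaseBdev1` is deleted. A minimal sketch of that expectation logic follows; the set of redundant levels is an assumption inferred from `has_redundancy` returning 1 for raid0 here, not a list taken from the script:

```python
# Sketch of the state expectation exercised above: raid0 stripes with
# no redundancy, so removing a base bdev from an online raid0 array
# takes it offline, while a redundant level would survive the removal.
REDUNDANT_LEVELS = {"raid1", "raid5f"}  # assumption, not from the log

def expected_state_after_removal(raid_level: str) -> str:
    """Expected raid bdev state after one base bdev is removed."""
    return "online" if raid_level in REDUNDANT_LEVELS else "offline"

print(expected_state_after_removal("raid0"))  # offline, as in the dump
```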
00:10:16.188 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.188 "name": "Existed_Raid", 00:10:16.188 "uuid": "d642a28e-d7f6-41f3-bc28-70acfec1f7b0", 00:10:16.188 "strip_size_kb": 64, 00:10:16.188 "state": "offline", 00:10:16.188 "raid_level": "raid0", 00:10:16.188 "superblock": true, 00:10:16.188 "num_base_bdevs": 4, 00:10:16.188 "num_base_bdevs_discovered": 3, 00:10:16.188 "num_base_bdevs_operational": 3, 00:10:16.188 "base_bdevs_list": [ 00:10:16.188 { 00:10:16.188 "name": null, 00:10:16.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.188 "is_configured": false, 00:10:16.188 "data_offset": 0, 00:10:16.188 "data_size": 63488 00:10:16.188 }, 00:10:16.188 { 00:10:16.188 "name": "BaseBdev2", 00:10:16.188 "uuid": "a69764fe-a077-4714-8516-1e4091962861", 00:10:16.188 "is_configured": true, 00:10:16.188 "data_offset": 2048, 00:10:16.188 "data_size": 63488 00:10:16.188 }, 00:10:16.188 { 00:10:16.188 "name": "BaseBdev3", 00:10:16.188 "uuid": "ec47d12b-8453-4386-826d-ad0a79bf3925", 00:10:16.188 "is_configured": true, 00:10:16.188 "data_offset": 2048, 00:10:16.188 "data_size": 63488 00:10:16.188 }, 00:10:16.188 { 00:10:16.188 "name": "BaseBdev4", 00:10:16.188 "uuid": "b8a864ca-fc8a-4a8d-a4ea-bf3ba9a05f0d", 00:10:16.188 "is_configured": true, 00:10:16.188 "data_offset": 2048, 00:10:16.188 "data_size": 63488 00:10:16.188 } 00:10:16.188 ] 00:10:16.188 }' 00:10:16.188 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.188 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.448 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:16.448 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.448 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.448 
09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.448 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.448 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.448 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.448 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.448 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.448 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:16.448 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.448 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.448 [2024-12-06 09:47:41.657215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.709 [2024-12-06 09:47:41.812286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:16.709 09:47:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.709 09:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.709 [2024-12-06 09:47:41.968702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:16.709 [2024-12-06 09:47:41.968794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.970 BaseBdev2 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.970 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.970 [ 00:10:16.970 { 00:10:16.970 "name": "BaseBdev2", 00:10:16.970 "aliases": [ 00:10:16.970 
"961d3340-e8d5-4c61-aaec-83aaa504c7a6" 00:10:16.970 ], 00:10:16.970 "product_name": "Malloc disk", 00:10:16.970 "block_size": 512, 00:10:16.970 "num_blocks": 65536, 00:10:16.970 "uuid": "961d3340-e8d5-4c61-aaec-83aaa504c7a6", 00:10:16.970 "assigned_rate_limits": { 00:10:16.970 "rw_ios_per_sec": 0, 00:10:16.970 "rw_mbytes_per_sec": 0, 00:10:16.970 "r_mbytes_per_sec": 0, 00:10:16.970 "w_mbytes_per_sec": 0 00:10:16.970 }, 00:10:16.970 "claimed": false, 00:10:16.970 "zoned": false, 00:10:16.970 "supported_io_types": { 00:10:16.970 "read": true, 00:10:16.970 "write": true, 00:10:16.970 "unmap": true, 00:10:16.970 "flush": true, 00:10:16.970 "reset": true, 00:10:16.970 "nvme_admin": false, 00:10:16.970 "nvme_io": false, 00:10:16.970 "nvme_io_md": false, 00:10:16.970 "write_zeroes": true, 00:10:16.970 "zcopy": true, 00:10:16.970 "get_zone_info": false, 00:10:16.970 "zone_management": false, 00:10:16.970 "zone_append": false, 00:10:16.970 "compare": false, 00:10:16.970 "compare_and_write": false, 00:10:16.970 "abort": true, 00:10:16.970 "seek_hole": false, 00:10:16.970 "seek_data": false, 00:10:16.970 "copy": true, 00:10:16.971 "nvme_iov_md": false 00:10:16.971 }, 00:10:16.971 "memory_domains": [ 00:10:16.971 { 00:10:16.971 "dma_device_id": "system", 00:10:16.971 "dma_device_type": 1 00:10:16.971 }, 00:10:16.971 { 00:10:16.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.971 "dma_device_type": 2 00:10:16.971 } 00:10:16.971 ], 00:10:16.971 "driver_specific": {} 00:10:16.971 } 00:10:16.971 ] 00:10:16.971 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.971 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.971 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:16.971 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:16.971 09:47:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:16.971 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.971 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.231 BaseBdev3 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.231 [ 00:10:17.231 { 
00:10:17.231 "name": "BaseBdev3", 00:10:17.231 "aliases": [ 00:10:17.231 "a34cfb39-5187-4533-85f9-914b6431a951" 00:10:17.231 ], 00:10:17.231 "product_name": "Malloc disk", 00:10:17.231 "block_size": 512, 00:10:17.231 "num_blocks": 65536, 00:10:17.231 "uuid": "a34cfb39-5187-4533-85f9-914b6431a951", 00:10:17.231 "assigned_rate_limits": { 00:10:17.231 "rw_ios_per_sec": 0, 00:10:17.231 "rw_mbytes_per_sec": 0, 00:10:17.231 "r_mbytes_per_sec": 0, 00:10:17.231 "w_mbytes_per_sec": 0 00:10:17.231 }, 00:10:17.231 "claimed": false, 00:10:17.231 "zoned": false, 00:10:17.231 "supported_io_types": { 00:10:17.231 "read": true, 00:10:17.231 "write": true, 00:10:17.231 "unmap": true, 00:10:17.231 "flush": true, 00:10:17.231 "reset": true, 00:10:17.231 "nvme_admin": false, 00:10:17.231 "nvme_io": false, 00:10:17.231 "nvme_io_md": false, 00:10:17.231 "write_zeroes": true, 00:10:17.231 "zcopy": true, 00:10:17.231 "get_zone_info": false, 00:10:17.231 "zone_management": false, 00:10:17.231 "zone_append": false, 00:10:17.231 "compare": false, 00:10:17.231 "compare_and_write": false, 00:10:17.231 "abort": true, 00:10:17.231 "seek_hole": false, 00:10:17.231 "seek_data": false, 00:10:17.231 "copy": true, 00:10:17.231 "nvme_iov_md": false 00:10:17.231 }, 00:10:17.231 "memory_domains": [ 00:10:17.231 { 00:10:17.231 "dma_device_id": "system", 00:10:17.231 "dma_device_type": 1 00:10:17.231 }, 00:10:17.231 { 00:10:17.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.231 "dma_device_type": 2 00:10:17.231 } 00:10:17.231 ], 00:10:17.231 "driver_specific": {} 00:10:17.231 } 00:10:17.231 ] 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.231 BaseBdev4 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:17.231 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:17.232 [ 00:10:17.232 { 00:10:17.232 "name": "BaseBdev4", 00:10:17.232 "aliases": [ 00:10:17.232 "7378e1e5-89e2-4df3-9de6-5778819c7b01" 00:10:17.232 ], 00:10:17.232 "product_name": "Malloc disk", 00:10:17.232 "block_size": 512, 00:10:17.232 "num_blocks": 65536, 00:10:17.232 "uuid": "7378e1e5-89e2-4df3-9de6-5778819c7b01", 00:10:17.232 "assigned_rate_limits": { 00:10:17.232 "rw_ios_per_sec": 0, 00:10:17.232 "rw_mbytes_per_sec": 0, 00:10:17.232 "r_mbytes_per_sec": 0, 00:10:17.232 "w_mbytes_per_sec": 0 00:10:17.232 }, 00:10:17.232 "claimed": false, 00:10:17.232 "zoned": false, 00:10:17.232 "supported_io_types": { 00:10:17.232 "read": true, 00:10:17.232 "write": true, 00:10:17.232 "unmap": true, 00:10:17.232 "flush": true, 00:10:17.232 "reset": true, 00:10:17.232 "nvme_admin": false, 00:10:17.232 "nvme_io": false, 00:10:17.232 "nvme_io_md": false, 00:10:17.232 "write_zeroes": true, 00:10:17.232 "zcopy": true, 00:10:17.232 "get_zone_info": false, 00:10:17.232 "zone_management": false, 00:10:17.232 "zone_append": false, 00:10:17.232 "compare": false, 00:10:17.232 "compare_and_write": false, 00:10:17.232 "abort": true, 00:10:17.232 "seek_hole": false, 00:10:17.232 "seek_data": false, 00:10:17.232 "copy": true, 00:10:17.232 "nvme_iov_md": false 00:10:17.232 }, 00:10:17.232 "memory_domains": [ 00:10:17.232 { 00:10:17.232 "dma_device_id": "system", 00:10:17.232 "dma_device_type": 1 00:10:17.232 }, 00:10:17.232 { 00:10:17.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.232 "dma_device_type": 2 00:10:17.232 } 00:10:17.232 ], 00:10:17.232 "driver_specific": {} 00:10:17.232 } 00:10:17.232 ] 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.232 09:47:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.232 [2024-12-06 09:47:42.371794] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.232 [2024-12-06 09:47:42.371922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.232 [2024-12-06 09:47:42.371957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.232 [2024-12-06 09:47:42.374086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.232 [2024-12-06 09:47:42.374217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.232 "name": "Existed_Raid", 00:10:17.232 "uuid": "ae3c4609-43b7-4baf-8883-0e5b66daeaa3", 00:10:17.232 "strip_size_kb": 64, 00:10:17.232 "state": "configuring", 00:10:17.232 "raid_level": "raid0", 00:10:17.232 "superblock": true, 00:10:17.232 "num_base_bdevs": 4, 00:10:17.232 "num_base_bdevs_discovered": 3, 00:10:17.232 "num_base_bdevs_operational": 4, 00:10:17.232 "base_bdevs_list": [ 00:10:17.232 { 00:10:17.232 "name": "BaseBdev1", 00:10:17.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.232 "is_configured": false, 00:10:17.232 "data_offset": 0, 00:10:17.232 "data_size": 0 00:10:17.232 }, 00:10:17.232 { 00:10:17.232 "name": "BaseBdev2", 00:10:17.232 "uuid": "961d3340-e8d5-4c61-aaec-83aaa504c7a6", 00:10:17.232 "is_configured": true, 00:10:17.232 "data_offset": 2048, 00:10:17.232 "data_size": 63488 
00:10:17.232 }, 00:10:17.232 { 00:10:17.232 "name": "BaseBdev3", 00:10:17.232 "uuid": "a34cfb39-5187-4533-85f9-914b6431a951", 00:10:17.232 "is_configured": true, 00:10:17.232 "data_offset": 2048, 00:10:17.232 "data_size": 63488 00:10:17.232 }, 00:10:17.232 { 00:10:17.232 "name": "BaseBdev4", 00:10:17.232 "uuid": "7378e1e5-89e2-4df3-9de6-5778819c7b01", 00:10:17.232 "is_configured": true, 00:10:17.232 "data_offset": 2048, 00:10:17.232 "data_size": 63488 00:10:17.232 } 00:10:17.232 ] 00:10:17.232 }' 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.232 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.802 [2024-12-06 09:47:42.779063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.802 "name": "Existed_Raid", 00:10:17.802 "uuid": "ae3c4609-43b7-4baf-8883-0e5b66daeaa3", 00:10:17.802 "strip_size_kb": 64, 00:10:17.802 "state": "configuring", 00:10:17.802 "raid_level": "raid0", 00:10:17.802 "superblock": true, 00:10:17.802 "num_base_bdevs": 4, 00:10:17.802 "num_base_bdevs_discovered": 2, 00:10:17.802 "num_base_bdevs_operational": 4, 00:10:17.802 "base_bdevs_list": [ 00:10:17.802 { 00:10:17.802 "name": "BaseBdev1", 00:10:17.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.802 "is_configured": false, 00:10:17.802 "data_offset": 0, 00:10:17.802 "data_size": 0 00:10:17.802 }, 00:10:17.802 { 00:10:17.802 "name": null, 00:10:17.802 "uuid": "961d3340-e8d5-4c61-aaec-83aaa504c7a6", 00:10:17.802 "is_configured": false, 00:10:17.802 "data_offset": 0, 00:10:17.802 "data_size": 63488 
00:10:17.802 }, 00:10:17.802 { 00:10:17.802 "name": "BaseBdev3", 00:10:17.802 "uuid": "a34cfb39-5187-4533-85f9-914b6431a951", 00:10:17.802 "is_configured": true, 00:10:17.802 "data_offset": 2048, 00:10:17.802 "data_size": 63488 00:10:17.802 }, 00:10:17.802 { 00:10:17.802 "name": "BaseBdev4", 00:10:17.802 "uuid": "7378e1e5-89e2-4df3-9de6-5778819c7b01", 00:10:17.802 "is_configured": true, 00:10:17.802 "data_offset": 2048, 00:10:17.802 "data_size": 63488 00:10:17.802 } 00:10:17.802 ] 00:10:17.802 }' 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.802 09:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.063 [2024-12-06 09:47:43.232212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.063 BaseBdev1 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.063 [ 00:10:18.063 { 00:10:18.063 "name": "BaseBdev1", 00:10:18.063 "aliases": [ 00:10:18.063 "46772a80-16fa-4a06-ad98-a596832860a8" 00:10:18.063 ], 00:10:18.063 "product_name": "Malloc disk", 00:10:18.063 "block_size": 512, 00:10:18.063 "num_blocks": 65536, 00:10:18.063 "uuid": "46772a80-16fa-4a06-ad98-a596832860a8", 00:10:18.063 "assigned_rate_limits": { 00:10:18.063 "rw_ios_per_sec": 0, 00:10:18.063 "rw_mbytes_per_sec": 0, 
00:10:18.063 "r_mbytes_per_sec": 0, 00:10:18.063 "w_mbytes_per_sec": 0 00:10:18.063 }, 00:10:18.063 "claimed": true, 00:10:18.063 "claim_type": "exclusive_write", 00:10:18.063 "zoned": false, 00:10:18.063 "supported_io_types": { 00:10:18.063 "read": true, 00:10:18.063 "write": true, 00:10:18.063 "unmap": true, 00:10:18.063 "flush": true, 00:10:18.063 "reset": true, 00:10:18.063 "nvme_admin": false, 00:10:18.063 "nvme_io": false, 00:10:18.063 "nvme_io_md": false, 00:10:18.063 "write_zeroes": true, 00:10:18.063 "zcopy": true, 00:10:18.063 "get_zone_info": false, 00:10:18.063 "zone_management": false, 00:10:18.063 "zone_append": false, 00:10:18.063 "compare": false, 00:10:18.063 "compare_and_write": false, 00:10:18.063 "abort": true, 00:10:18.063 "seek_hole": false, 00:10:18.063 "seek_data": false, 00:10:18.063 "copy": true, 00:10:18.063 "nvme_iov_md": false 00:10:18.063 }, 00:10:18.063 "memory_domains": [ 00:10:18.063 { 00:10:18.063 "dma_device_id": "system", 00:10:18.063 "dma_device_type": 1 00:10:18.063 }, 00:10:18.063 { 00:10:18.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.063 "dma_device_type": 2 00:10:18.063 } 00:10:18.063 ], 00:10:18.063 "driver_specific": {} 00:10:18.063 } 00:10:18.063 ] 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.063 09:47:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.063 "name": "Existed_Raid", 00:10:18.063 "uuid": "ae3c4609-43b7-4baf-8883-0e5b66daeaa3", 00:10:18.063 "strip_size_kb": 64, 00:10:18.063 "state": "configuring", 00:10:18.063 "raid_level": "raid0", 00:10:18.063 "superblock": true, 00:10:18.063 "num_base_bdevs": 4, 00:10:18.063 "num_base_bdevs_discovered": 3, 00:10:18.063 "num_base_bdevs_operational": 4, 00:10:18.063 "base_bdevs_list": [ 00:10:18.063 { 00:10:18.063 "name": "BaseBdev1", 00:10:18.063 "uuid": "46772a80-16fa-4a06-ad98-a596832860a8", 00:10:18.063 "is_configured": true, 00:10:18.063 "data_offset": 2048, 00:10:18.063 "data_size": 63488 00:10:18.063 }, 00:10:18.063 { 
00:10:18.063 "name": null, 00:10:18.063 "uuid": "961d3340-e8d5-4c61-aaec-83aaa504c7a6", 00:10:18.063 "is_configured": false, 00:10:18.063 "data_offset": 0, 00:10:18.063 "data_size": 63488 00:10:18.063 }, 00:10:18.063 { 00:10:18.063 "name": "BaseBdev3", 00:10:18.063 "uuid": "a34cfb39-5187-4533-85f9-914b6431a951", 00:10:18.063 "is_configured": true, 00:10:18.063 "data_offset": 2048, 00:10:18.063 "data_size": 63488 00:10:18.063 }, 00:10:18.063 { 00:10:18.063 "name": "BaseBdev4", 00:10:18.063 "uuid": "7378e1e5-89e2-4df3-9de6-5778819c7b01", 00:10:18.063 "is_configured": true, 00:10:18.063 "data_offset": 2048, 00:10:18.063 "data_size": 63488 00:10:18.063 } 00:10:18.063 ] 00:10:18.063 }' 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.063 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.692 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.692 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.692 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.692 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.692 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.692 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:18.692 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:18.692 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.692 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.692 [2024-12-06 09:47:43.747416] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:18.692 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.692 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.692 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.693 09:47:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.693 "name": "Existed_Raid", 00:10:18.693 "uuid": "ae3c4609-43b7-4baf-8883-0e5b66daeaa3", 00:10:18.693 "strip_size_kb": 64, 00:10:18.693 "state": "configuring", 00:10:18.693 "raid_level": "raid0", 00:10:18.693 "superblock": true, 00:10:18.693 "num_base_bdevs": 4, 00:10:18.693 "num_base_bdevs_discovered": 2, 00:10:18.693 "num_base_bdevs_operational": 4, 00:10:18.693 "base_bdevs_list": [ 00:10:18.693 { 00:10:18.693 "name": "BaseBdev1", 00:10:18.693 "uuid": "46772a80-16fa-4a06-ad98-a596832860a8", 00:10:18.693 "is_configured": true, 00:10:18.693 "data_offset": 2048, 00:10:18.693 "data_size": 63488 00:10:18.693 }, 00:10:18.693 { 00:10:18.693 "name": null, 00:10:18.693 "uuid": "961d3340-e8d5-4c61-aaec-83aaa504c7a6", 00:10:18.693 "is_configured": false, 00:10:18.693 "data_offset": 0, 00:10:18.693 "data_size": 63488 00:10:18.693 }, 00:10:18.693 { 00:10:18.693 "name": null, 00:10:18.693 "uuid": "a34cfb39-5187-4533-85f9-914b6431a951", 00:10:18.693 "is_configured": false, 00:10:18.693 "data_offset": 0, 00:10:18.693 "data_size": 63488 00:10:18.693 }, 00:10:18.693 { 00:10:18.693 "name": "BaseBdev4", 00:10:18.693 "uuid": "7378e1e5-89e2-4df3-9de6-5778819c7b01", 00:10:18.693 "is_configured": true, 00:10:18.693 "data_offset": 2048, 00:10:18.693 "data_size": 63488 00:10:18.693 } 00:10:18.693 ] 00:10:18.693 }' 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.693 09:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.961 
09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.961 [2024-12-06 09:47:44.182642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.961 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.222 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.222 "name": "Existed_Raid", 00:10:19.222 "uuid": "ae3c4609-43b7-4baf-8883-0e5b66daeaa3", 00:10:19.222 "strip_size_kb": 64, 00:10:19.222 "state": "configuring", 00:10:19.222 "raid_level": "raid0", 00:10:19.222 "superblock": true, 00:10:19.222 "num_base_bdevs": 4, 00:10:19.222 "num_base_bdevs_discovered": 3, 00:10:19.222 "num_base_bdevs_operational": 4, 00:10:19.222 "base_bdevs_list": [ 00:10:19.222 { 00:10:19.222 "name": "BaseBdev1", 00:10:19.222 "uuid": "46772a80-16fa-4a06-ad98-a596832860a8", 00:10:19.222 "is_configured": true, 00:10:19.222 "data_offset": 2048, 00:10:19.222 "data_size": 63488 00:10:19.222 }, 00:10:19.222 { 00:10:19.222 "name": null, 00:10:19.222 "uuid": "961d3340-e8d5-4c61-aaec-83aaa504c7a6", 00:10:19.222 "is_configured": false, 00:10:19.222 "data_offset": 0, 00:10:19.222 "data_size": 63488 00:10:19.222 }, 00:10:19.222 { 00:10:19.222 "name": "BaseBdev3", 00:10:19.222 "uuid": "a34cfb39-5187-4533-85f9-914b6431a951", 00:10:19.222 "is_configured": true, 00:10:19.222 "data_offset": 2048, 00:10:19.222 "data_size": 63488 00:10:19.222 }, 00:10:19.222 { 00:10:19.222 "name": "BaseBdev4", 00:10:19.222 "uuid": 
"7378e1e5-89e2-4df3-9de6-5778819c7b01", 00:10:19.222 "is_configured": true, 00:10:19.222 "data_offset": 2048, 00:10:19.222 "data_size": 63488 00:10:19.222 } 00:10:19.222 ] 00:10:19.222 }' 00:10:19.222 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.222 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.481 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.481 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.481 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.481 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:19.481 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.481 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:19.481 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:19.481 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.481 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.481 [2024-12-06 09:47:44.689845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.741 "name": "Existed_Raid", 00:10:19.741 "uuid": "ae3c4609-43b7-4baf-8883-0e5b66daeaa3", 00:10:19.741 "strip_size_kb": 64, 00:10:19.741 "state": "configuring", 00:10:19.741 "raid_level": "raid0", 00:10:19.741 "superblock": true, 00:10:19.741 "num_base_bdevs": 4, 00:10:19.741 "num_base_bdevs_discovered": 2, 00:10:19.741 "num_base_bdevs_operational": 4, 00:10:19.741 "base_bdevs_list": [ 00:10:19.741 { 00:10:19.741 "name": null, 00:10:19.741 
"uuid": "46772a80-16fa-4a06-ad98-a596832860a8", 00:10:19.741 "is_configured": false, 00:10:19.741 "data_offset": 0, 00:10:19.741 "data_size": 63488 00:10:19.741 }, 00:10:19.741 { 00:10:19.741 "name": null, 00:10:19.741 "uuid": "961d3340-e8d5-4c61-aaec-83aaa504c7a6", 00:10:19.741 "is_configured": false, 00:10:19.741 "data_offset": 0, 00:10:19.741 "data_size": 63488 00:10:19.741 }, 00:10:19.741 { 00:10:19.741 "name": "BaseBdev3", 00:10:19.741 "uuid": "a34cfb39-5187-4533-85f9-914b6431a951", 00:10:19.741 "is_configured": true, 00:10:19.741 "data_offset": 2048, 00:10:19.741 "data_size": 63488 00:10:19.741 }, 00:10:19.741 { 00:10:19.741 "name": "BaseBdev4", 00:10:19.741 "uuid": "7378e1e5-89e2-4df3-9de6-5778819c7b01", 00:10:19.741 "is_configured": true, 00:10:19.741 "data_offset": 2048, 00:10:19.741 "data_size": 63488 00:10:19.741 } 00:10:19.741 ] 00:10:19.741 }' 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.741 09:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.001 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.001 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.001 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.001 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:20.001 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.001 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:20.001 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:20.001 09:47:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.001 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.001 [2024-12-06 09:47:45.267421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.001 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.001 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.261 09:47:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.261 "name": "Existed_Raid", 00:10:20.261 "uuid": "ae3c4609-43b7-4baf-8883-0e5b66daeaa3", 00:10:20.261 "strip_size_kb": 64, 00:10:20.261 "state": "configuring", 00:10:20.261 "raid_level": "raid0", 00:10:20.261 "superblock": true, 00:10:20.261 "num_base_bdevs": 4, 00:10:20.261 "num_base_bdevs_discovered": 3, 00:10:20.261 "num_base_bdevs_operational": 4, 00:10:20.261 "base_bdevs_list": [ 00:10:20.261 { 00:10:20.261 "name": null, 00:10:20.261 "uuid": "46772a80-16fa-4a06-ad98-a596832860a8", 00:10:20.261 "is_configured": false, 00:10:20.261 "data_offset": 0, 00:10:20.261 "data_size": 63488 00:10:20.261 }, 00:10:20.261 { 00:10:20.261 "name": "BaseBdev2", 00:10:20.261 "uuid": "961d3340-e8d5-4c61-aaec-83aaa504c7a6", 00:10:20.261 "is_configured": true, 00:10:20.261 "data_offset": 2048, 00:10:20.261 "data_size": 63488 00:10:20.261 }, 00:10:20.261 { 00:10:20.261 "name": "BaseBdev3", 00:10:20.261 "uuid": "a34cfb39-5187-4533-85f9-914b6431a951", 00:10:20.261 "is_configured": true, 00:10:20.261 "data_offset": 2048, 00:10:20.261 "data_size": 63488 00:10:20.261 }, 00:10:20.261 { 00:10:20.261 "name": "BaseBdev4", 00:10:20.261 "uuid": "7378e1e5-89e2-4df3-9de6-5778819c7b01", 00:10:20.261 "is_configured": true, 00:10:20.261 "data_offset": 2048, 00:10:20.261 "data_size": 63488 00:10:20.261 } 00:10:20.261 ] 00:10:20.261 }' 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.261 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.519 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.519 09:47:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.519 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:20.519 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.519 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.519 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:20.519 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.519 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:20.519 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.519 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.519 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 46772a80-16fa-4a06-ad98-a596832860a8 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.778 [2024-12-06 09:47:45.842108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:20.778 [2024-12-06 09:47:45.842474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:20.778 [2024-12-06 09:47:45.842523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.778 [2024-12-06 09:47:45.842803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:20.778 NewBaseBdev 00:10:20.778 [2024-12-06 09:47:45.842977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:20.778 [2024-12-06 09:47:45.843005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:20.778 [2024-12-06 09:47:45.843154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:20.778 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.778 09:47:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.778 [ 00:10:20.778 { 00:10:20.778 "name": "NewBaseBdev", 00:10:20.778 "aliases": [ 00:10:20.778 "46772a80-16fa-4a06-ad98-a596832860a8" 00:10:20.778 ], 00:10:20.779 "product_name": "Malloc disk", 00:10:20.779 "block_size": 512, 00:10:20.779 "num_blocks": 65536, 00:10:20.779 "uuid": "46772a80-16fa-4a06-ad98-a596832860a8", 00:10:20.779 "assigned_rate_limits": { 00:10:20.779 "rw_ios_per_sec": 0, 00:10:20.779 "rw_mbytes_per_sec": 0, 00:10:20.779 "r_mbytes_per_sec": 0, 00:10:20.779 "w_mbytes_per_sec": 0 00:10:20.779 }, 00:10:20.779 "claimed": true, 00:10:20.779 "claim_type": "exclusive_write", 00:10:20.779 "zoned": false, 00:10:20.779 "supported_io_types": { 00:10:20.779 "read": true, 00:10:20.779 "write": true, 00:10:20.779 "unmap": true, 00:10:20.779 "flush": true, 00:10:20.779 "reset": true, 00:10:20.779 "nvme_admin": false, 00:10:20.779 "nvme_io": false, 00:10:20.779 "nvme_io_md": false, 00:10:20.779 "write_zeroes": true, 00:10:20.779 "zcopy": true, 00:10:20.779 "get_zone_info": false, 00:10:20.779 "zone_management": false, 00:10:20.779 "zone_append": false, 00:10:20.779 "compare": false, 00:10:20.779 "compare_and_write": false, 00:10:20.779 "abort": true, 00:10:20.779 "seek_hole": false, 00:10:20.779 "seek_data": false, 00:10:20.779 "copy": true, 00:10:20.779 "nvme_iov_md": false 00:10:20.779 }, 00:10:20.779 "memory_domains": [ 00:10:20.779 { 00:10:20.779 "dma_device_id": "system", 00:10:20.779 "dma_device_type": 1 00:10:20.779 }, 00:10:20.779 { 00:10:20.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.779 "dma_device_type": 2 00:10:20.779 } 00:10:20.779 ], 00:10:20.779 "driver_specific": {} 00:10:20.779 } 00:10:20.779 ] 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:20.779 09:47:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.779 "name": "Existed_Raid", 00:10:20.779 "uuid": "ae3c4609-43b7-4baf-8883-0e5b66daeaa3", 00:10:20.779 "strip_size_kb": 64, 00:10:20.779 
"state": "online", 00:10:20.779 "raid_level": "raid0", 00:10:20.779 "superblock": true, 00:10:20.779 "num_base_bdevs": 4, 00:10:20.779 "num_base_bdevs_discovered": 4, 00:10:20.779 "num_base_bdevs_operational": 4, 00:10:20.779 "base_bdevs_list": [ 00:10:20.779 { 00:10:20.779 "name": "NewBaseBdev", 00:10:20.779 "uuid": "46772a80-16fa-4a06-ad98-a596832860a8", 00:10:20.779 "is_configured": true, 00:10:20.779 "data_offset": 2048, 00:10:20.779 "data_size": 63488 00:10:20.779 }, 00:10:20.779 { 00:10:20.779 "name": "BaseBdev2", 00:10:20.779 "uuid": "961d3340-e8d5-4c61-aaec-83aaa504c7a6", 00:10:20.779 "is_configured": true, 00:10:20.779 "data_offset": 2048, 00:10:20.779 "data_size": 63488 00:10:20.779 }, 00:10:20.779 { 00:10:20.779 "name": "BaseBdev3", 00:10:20.779 "uuid": "a34cfb39-5187-4533-85f9-914b6431a951", 00:10:20.779 "is_configured": true, 00:10:20.779 "data_offset": 2048, 00:10:20.779 "data_size": 63488 00:10:20.779 }, 00:10:20.779 { 00:10:20.779 "name": "BaseBdev4", 00:10:20.779 "uuid": "7378e1e5-89e2-4df3-9de6-5778819c7b01", 00:10:20.779 "is_configured": true, 00:10:20.779 "data_offset": 2048, 00:10:20.779 "data_size": 63488 00:10:20.779 } 00:10:20.779 ] 00:10:20.779 }' 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.779 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.347 
09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.347 [2024-12-06 09:47:46.349655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.347 "name": "Existed_Raid", 00:10:21.347 "aliases": [ 00:10:21.347 "ae3c4609-43b7-4baf-8883-0e5b66daeaa3" 00:10:21.347 ], 00:10:21.347 "product_name": "Raid Volume", 00:10:21.347 "block_size": 512, 00:10:21.347 "num_blocks": 253952, 00:10:21.347 "uuid": "ae3c4609-43b7-4baf-8883-0e5b66daeaa3", 00:10:21.347 "assigned_rate_limits": { 00:10:21.347 "rw_ios_per_sec": 0, 00:10:21.347 "rw_mbytes_per_sec": 0, 00:10:21.347 "r_mbytes_per_sec": 0, 00:10:21.347 "w_mbytes_per_sec": 0 00:10:21.347 }, 00:10:21.347 "claimed": false, 00:10:21.347 "zoned": false, 00:10:21.347 "supported_io_types": { 00:10:21.347 "read": true, 00:10:21.347 "write": true, 00:10:21.347 "unmap": true, 00:10:21.347 "flush": true, 00:10:21.347 "reset": true, 00:10:21.347 "nvme_admin": false, 00:10:21.347 "nvme_io": false, 00:10:21.347 "nvme_io_md": false, 00:10:21.347 "write_zeroes": true, 00:10:21.347 "zcopy": false, 00:10:21.347 "get_zone_info": false, 00:10:21.347 "zone_management": false, 00:10:21.347 "zone_append": false, 00:10:21.347 "compare": false, 00:10:21.347 "compare_and_write": false, 00:10:21.347 "abort": 
false, 00:10:21.347 "seek_hole": false, 00:10:21.347 "seek_data": false, 00:10:21.347 "copy": false, 00:10:21.347 "nvme_iov_md": false 00:10:21.347 }, 00:10:21.347 "memory_domains": [ 00:10:21.347 { 00:10:21.347 "dma_device_id": "system", 00:10:21.347 "dma_device_type": 1 00:10:21.347 }, 00:10:21.347 { 00:10:21.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.347 "dma_device_type": 2 00:10:21.347 }, 00:10:21.347 { 00:10:21.347 "dma_device_id": "system", 00:10:21.347 "dma_device_type": 1 00:10:21.347 }, 00:10:21.347 { 00:10:21.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.347 "dma_device_type": 2 00:10:21.347 }, 00:10:21.347 { 00:10:21.347 "dma_device_id": "system", 00:10:21.347 "dma_device_type": 1 00:10:21.347 }, 00:10:21.347 { 00:10:21.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.347 "dma_device_type": 2 00:10:21.347 }, 00:10:21.347 { 00:10:21.347 "dma_device_id": "system", 00:10:21.347 "dma_device_type": 1 00:10:21.347 }, 00:10:21.347 { 00:10:21.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.347 "dma_device_type": 2 00:10:21.347 } 00:10:21.347 ], 00:10:21.347 "driver_specific": { 00:10:21.347 "raid": { 00:10:21.347 "uuid": "ae3c4609-43b7-4baf-8883-0e5b66daeaa3", 00:10:21.347 "strip_size_kb": 64, 00:10:21.347 "state": "online", 00:10:21.347 "raid_level": "raid0", 00:10:21.347 "superblock": true, 00:10:21.347 "num_base_bdevs": 4, 00:10:21.347 "num_base_bdevs_discovered": 4, 00:10:21.347 "num_base_bdevs_operational": 4, 00:10:21.347 "base_bdevs_list": [ 00:10:21.347 { 00:10:21.347 "name": "NewBaseBdev", 00:10:21.347 "uuid": "46772a80-16fa-4a06-ad98-a596832860a8", 00:10:21.347 "is_configured": true, 00:10:21.347 "data_offset": 2048, 00:10:21.347 "data_size": 63488 00:10:21.347 }, 00:10:21.347 { 00:10:21.347 "name": "BaseBdev2", 00:10:21.347 "uuid": "961d3340-e8d5-4c61-aaec-83aaa504c7a6", 00:10:21.347 "is_configured": true, 00:10:21.347 "data_offset": 2048, 00:10:21.347 "data_size": 63488 00:10:21.347 }, 00:10:21.347 { 00:10:21.347 
"name": "BaseBdev3", 00:10:21.347 "uuid": "a34cfb39-5187-4533-85f9-914b6431a951", 00:10:21.347 "is_configured": true, 00:10:21.347 "data_offset": 2048, 00:10:21.347 "data_size": 63488 00:10:21.347 }, 00:10:21.347 { 00:10:21.347 "name": "BaseBdev4", 00:10:21.347 "uuid": "7378e1e5-89e2-4df3-9de6-5778819c7b01", 00:10:21.347 "is_configured": true, 00:10:21.347 "data_offset": 2048, 00:10:21.347 "data_size": 63488 00:10:21.347 } 00:10:21.347 ] 00:10:21.347 } 00:10:21.347 } 00:10:21.347 }' 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:21.347 BaseBdev2 00:10:21.347 BaseBdev3 00:10:21.347 BaseBdev4' 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.347 09:47:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.347 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:21.348 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.348 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.348 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.348 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.348 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.348 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.348 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:21.348 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.348 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.348 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.348 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.607 [2024-12-06 09:47:46.684733] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.607 [2024-12-06 09:47:46.684817] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.607 [2024-12-06 09:47:46.684935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.607 [2024-12-06 09:47:46.685065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.607 [2024-12-06 09:47:46.685117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69993 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69993 ']' 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69993 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69993 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69993' 00:10:21.607 killing process with pid 69993 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69993 00:10:21.607 [2024-12-06 09:47:46.733091] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.607 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69993 00:10:21.866 [2024-12-06 09:47:47.122438] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.242 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:23.242 00:10:23.242 real 0m11.230s 00:10:23.242 user 0m17.775s 00:10:23.242 sys 0m2.028s 00:10:23.242 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.242 09:47:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.242 ************************************ 00:10:23.242 END TEST raid_state_function_test_sb 00:10:23.242 ************************************ 00:10:23.242 09:47:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:23.242 09:47:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:23.242 09:47:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.242 09:47:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.242 ************************************ 00:10:23.242 START TEST raid_superblock_test 00:10:23.242 ************************************ 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70658 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70658 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70658 ']' 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.242 09:47:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.242 [2024-12-06 09:47:48.413991] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:10:23.242 [2024-12-06 09:47:48.414672] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70658 ] 00:10:23.501 [2024-12-06 09:47:48.585017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.501 [2024-12-06 09:47:48.701702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.760 [2024-12-06 09:47:48.903460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.760 [2024-12-06 09:47:48.903564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:24.019 
09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.019 malloc1 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.019 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.278 [2024-12-06 09:47:49.293721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:24.279 [2024-12-06 09:47:49.293840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.279 [2024-12-06 09:47:49.293878] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:24.279 [2024-12-06 09:47:49.293907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.279 [2024-12-06 09:47:49.295938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.279 [2024-12-06 09:47:49.295977] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:24.279 pt1 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.279 malloc2 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.279 [2024-12-06 09:47:49.346120] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.279 [2024-12-06 09:47:49.346230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.279 [2024-12-06 09:47:49.346272] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:24.279 [2024-12-06 09:47:49.346299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.279 [2024-12-06 09:47:49.348368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.279 [2024-12-06 09:47:49.348441] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.279 
pt2 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.279 malloc3 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.279 [2024-12-06 09:47:49.414221] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:24.279 [2024-12-06 09:47:49.414333] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.279 [2024-12-06 09:47:49.414371] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:24.279 [2024-12-06 09:47:49.414400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.279 [2024-12-06 09:47:49.416498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.279 [2024-12-06 09:47:49.416583] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:24.279 pt3 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.279 malloc4 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.279 [2024-12-06 09:47:49.469690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:24.279 [2024-12-06 09:47:49.469796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.279 [2024-12-06 09:47:49.469834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:24.279 [2024-12-06 09:47:49.469862] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.279 [2024-12-06 09:47:49.471936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.279 [2024-12-06 09:47:49.472025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:24.279 pt4 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.279 [2024-12-06 09:47:49.481702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:24.279 [2024-12-06 
09:47:49.483503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.279 [2024-12-06 09:47:49.483624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:24.279 [2024-12-06 09:47:49.483693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:24.279 [2024-12-06 09:47:49.483932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:24.279 [2024-12-06 09:47:49.483981] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:24.279 [2024-12-06 09:47:49.484257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:24.279 [2024-12-06 09:47:49.484463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:24.279 [2024-12-06 09:47:49.484509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:24.279 [2024-12-06 09:47:49.484695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.279 "name": "raid_bdev1", 00:10:24.279 "uuid": "c16d8ada-6ccb-4e13-868e-224c59aa8e8b", 00:10:24.279 "strip_size_kb": 64, 00:10:24.279 "state": "online", 00:10:24.279 "raid_level": "raid0", 00:10:24.279 "superblock": true, 00:10:24.279 "num_base_bdevs": 4, 00:10:24.279 "num_base_bdevs_discovered": 4, 00:10:24.279 "num_base_bdevs_operational": 4, 00:10:24.279 "base_bdevs_list": [ 00:10:24.279 { 00:10:24.279 "name": "pt1", 00:10:24.279 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.279 "is_configured": true, 00:10:24.279 "data_offset": 2048, 00:10:24.279 "data_size": 63488 00:10:24.279 }, 00:10:24.279 { 00:10:24.279 "name": "pt2", 00:10:24.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.279 "is_configured": true, 00:10:24.279 "data_offset": 2048, 00:10:24.279 "data_size": 63488 00:10:24.279 }, 00:10:24.279 { 00:10:24.279 "name": "pt3", 00:10:24.279 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.279 "is_configured": true, 00:10:24.279 "data_offset": 2048, 00:10:24.279 
"data_size": 63488 00:10:24.279 }, 00:10:24.279 { 00:10:24.279 "name": "pt4", 00:10:24.279 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:24.279 "is_configured": true, 00:10:24.279 "data_offset": 2048, 00:10:24.279 "data_size": 63488 00:10:24.279 } 00:10:24.279 ] 00:10:24.279 }' 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.279 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.845 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:24.845 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:24.845 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:24.845 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:24.845 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:24.845 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:24.845 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:24.845 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.845 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.845 09:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:24.845 [2024-12-06 09:47:49.965276] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:24.845 09:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.845 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:24.845 "name": "raid_bdev1", 00:10:24.845 "aliases": [ 00:10:24.845 "c16d8ada-6ccb-4e13-868e-224c59aa8e8b" 
00:10:24.845 ], 00:10:24.845 "product_name": "Raid Volume", 00:10:24.845 "block_size": 512, 00:10:24.845 "num_blocks": 253952, 00:10:24.845 "uuid": "c16d8ada-6ccb-4e13-868e-224c59aa8e8b", 00:10:24.845 "assigned_rate_limits": { 00:10:24.845 "rw_ios_per_sec": 0, 00:10:24.845 "rw_mbytes_per_sec": 0, 00:10:24.845 "r_mbytes_per_sec": 0, 00:10:24.845 "w_mbytes_per_sec": 0 00:10:24.845 }, 00:10:24.845 "claimed": false, 00:10:24.845 "zoned": false, 00:10:24.845 "supported_io_types": { 00:10:24.845 "read": true, 00:10:24.845 "write": true, 00:10:24.845 "unmap": true, 00:10:24.845 "flush": true, 00:10:24.845 "reset": true, 00:10:24.845 "nvme_admin": false, 00:10:24.845 "nvme_io": false, 00:10:24.845 "nvme_io_md": false, 00:10:24.845 "write_zeroes": true, 00:10:24.845 "zcopy": false, 00:10:24.845 "get_zone_info": false, 00:10:24.846 "zone_management": false, 00:10:24.846 "zone_append": false, 00:10:24.846 "compare": false, 00:10:24.846 "compare_and_write": false, 00:10:24.846 "abort": false, 00:10:24.846 "seek_hole": false, 00:10:24.846 "seek_data": false, 00:10:24.846 "copy": false, 00:10:24.846 "nvme_iov_md": false 00:10:24.846 }, 00:10:24.846 "memory_domains": [ 00:10:24.846 { 00:10:24.846 "dma_device_id": "system", 00:10:24.846 "dma_device_type": 1 00:10:24.846 }, 00:10:24.846 { 00:10:24.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.846 "dma_device_type": 2 00:10:24.846 }, 00:10:24.846 { 00:10:24.846 "dma_device_id": "system", 00:10:24.846 "dma_device_type": 1 00:10:24.846 }, 00:10:24.846 { 00:10:24.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.846 "dma_device_type": 2 00:10:24.846 }, 00:10:24.846 { 00:10:24.846 "dma_device_id": "system", 00:10:24.846 "dma_device_type": 1 00:10:24.846 }, 00:10:24.846 { 00:10:24.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.846 "dma_device_type": 2 00:10:24.846 }, 00:10:24.846 { 00:10:24.846 "dma_device_id": "system", 00:10:24.846 "dma_device_type": 1 00:10:24.846 }, 00:10:24.846 { 00:10:24.846 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:24.846 "dma_device_type": 2 00:10:24.846 } 00:10:24.846 ], 00:10:24.846 "driver_specific": { 00:10:24.846 "raid": { 00:10:24.846 "uuid": "c16d8ada-6ccb-4e13-868e-224c59aa8e8b", 00:10:24.846 "strip_size_kb": 64, 00:10:24.846 "state": "online", 00:10:24.846 "raid_level": "raid0", 00:10:24.846 "superblock": true, 00:10:24.846 "num_base_bdevs": 4, 00:10:24.846 "num_base_bdevs_discovered": 4, 00:10:24.846 "num_base_bdevs_operational": 4, 00:10:24.846 "base_bdevs_list": [ 00:10:24.846 { 00:10:24.846 "name": "pt1", 00:10:24.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.846 "is_configured": true, 00:10:24.846 "data_offset": 2048, 00:10:24.846 "data_size": 63488 00:10:24.846 }, 00:10:24.846 { 00:10:24.846 "name": "pt2", 00:10:24.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.846 "is_configured": true, 00:10:24.846 "data_offset": 2048, 00:10:24.846 "data_size": 63488 00:10:24.846 }, 00:10:24.846 { 00:10:24.846 "name": "pt3", 00:10:24.846 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.846 "is_configured": true, 00:10:24.846 "data_offset": 2048, 00:10:24.846 "data_size": 63488 00:10:24.846 }, 00:10:24.846 { 00:10:24.846 "name": "pt4", 00:10:24.846 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:24.846 "is_configured": true, 00:10:24.846 "data_offset": 2048, 00:10:24.846 "data_size": 63488 00:10:24.846 } 00:10:24.846 ] 00:10:24.846 } 00:10:24.846 } 00:10:24.846 }' 00:10:24.846 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:24.846 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:24.846 pt2 00:10:24.846 pt3 00:10:24.846 pt4' 00:10:24.846 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.846 09:47:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:24.846 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:24.846 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:24.846 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:24.846 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.846 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.105 09:47:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.105 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:25.106 [2024-12-06 09:47:50.316582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c16d8ada-6ccb-4e13-868e-224c59aa8e8b 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c16d8ada-6ccb-4e13-868e-224c59aa8e8b ']' 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.106 [2024-12-06 09:47:50.368195] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.106 [2024-12-06 09:47:50.368257] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.106 [2024-12-06 09:47:50.368416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.106 [2024-12-06 09:47:50.368505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.106 [2024-12-06 09:47:50.368567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:25.106 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.365 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.366 [2024-12-06 09:47:50.527956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:25.366 [2024-12-06 09:47:50.529880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:25.366 [2024-12-06 09:47:50.529975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:25.366 [2024-12-06 09:47:50.530028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:25.366 [2024-12-06 09:47:50.530106] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:25.366 [2024-12-06 09:47:50.530203] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:25.366 [2024-12-06 09:47:50.530258] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:25.366 [2024-12-06 09:47:50.530329] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:25.366 [2024-12-06 09:47:50.530376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.366 [2024-12-06 09:47:50.530407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:10:25.366 request: 00:10:25.366 { 00:10:25.366 "name": "raid_bdev1", 00:10:25.366 "raid_level": "raid0", 00:10:25.366 "base_bdevs": [ 00:10:25.366 "malloc1", 00:10:25.366 "malloc2", 00:10:25.366 "malloc3", 00:10:25.366 "malloc4" 00:10:25.366 ], 00:10:25.366 "strip_size_kb": 64, 00:10:25.366 "superblock": false, 00:10:25.366 "method": "bdev_raid_create", 00:10:25.366 "req_id": 1 00:10:25.366 } 00:10:25.366 Got JSON-RPC error response 00:10:25.366 response: 00:10:25.366 { 00:10:25.366 "code": -17, 00:10:25.366 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:25.366 } 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.366 [2024-12-06 09:47:50.595787] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:25.366 [2024-12-06 09:47:50.595892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.366 [2024-12-06 09:47:50.595915] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:25.366 [2024-12-06 09:47:50.595926] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.366 [2024-12-06 09:47:50.598069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.366 [2024-12-06 09:47:50.598112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:25.366 [2024-12-06 09:47:50.598236] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:25.366 [2024-12-06 09:47:50.598301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:25.366 pt1 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.366 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.625 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.625 "name": "raid_bdev1", 00:10:25.625 "uuid": "c16d8ada-6ccb-4e13-868e-224c59aa8e8b", 00:10:25.625 "strip_size_kb": 64, 00:10:25.625 "state": "configuring", 00:10:25.625 "raid_level": "raid0", 00:10:25.625 "superblock": true, 00:10:25.625 "num_base_bdevs": 4, 00:10:25.625 "num_base_bdevs_discovered": 1, 00:10:25.625 "num_base_bdevs_operational": 4, 00:10:25.625 "base_bdevs_list": [ 00:10:25.625 { 00:10:25.625 "name": "pt1", 00:10:25.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.625 "is_configured": true, 00:10:25.625 "data_offset": 2048, 00:10:25.625 "data_size": 63488 00:10:25.625 }, 00:10:25.625 { 00:10:25.625 "name": null, 00:10:25.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.625 "is_configured": false, 00:10:25.625 "data_offset": 2048, 00:10:25.625 "data_size": 63488 00:10:25.625 }, 00:10:25.625 { 00:10:25.625 "name": null, 00:10:25.625 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:25.625 "is_configured": false, 00:10:25.625 "data_offset": 2048, 00:10:25.625 "data_size": 63488 00:10:25.625 }, 00:10:25.625 { 00:10:25.625 "name": null, 00:10:25.625 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:25.625 "is_configured": false, 00:10:25.625 "data_offset": 2048, 00:10:25.625 "data_size": 63488 00:10:25.625 } 00:10:25.625 ] 00:10:25.625 }' 00:10:25.625 09:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.625 09:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.884 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:25.884 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:25.884 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.884 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.884 [2024-12-06 09:47:51.063002] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:25.884 [2024-12-06 09:47:51.063140] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.884 [2024-12-06 09:47:51.063199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:25.884 [2024-12-06 09:47:51.063241] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.884 [2024-12-06 09:47:51.063715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.884 [2024-12-06 09:47:51.063784] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:25.884 [2024-12-06 09:47:51.063918] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:25.884 [2024-12-06 09:47:51.063982] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:25.884 pt2 00:10:25.884 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.884 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:25.884 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.884 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.884 [2024-12-06 09:47:51.074980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:25.884 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.885 09:47:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.885 "name": "raid_bdev1", 00:10:25.885 "uuid": "c16d8ada-6ccb-4e13-868e-224c59aa8e8b", 00:10:25.885 "strip_size_kb": 64, 00:10:25.885 "state": "configuring", 00:10:25.885 "raid_level": "raid0", 00:10:25.885 "superblock": true, 00:10:25.885 "num_base_bdevs": 4, 00:10:25.885 "num_base_bdevs_discovered": 1, 00:10:25.885 "num_base_bdevs_operational": 4, 00:10:25.885 "base_bdevs_list": [ 00:10:25.885 { 00:10:25.885 "name": "pt1", 00:10:25.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.885 "is_configured": true, 00:10:25.885 "data_offset": 2048, 00:10:25.885 "data_size": 63488 00:10:25.885 }, 00:10:25.885 { 00:10:25.885 "name": null, 00:10:25.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.885 "is_configured": false, 00:10:25.885 "data_offset": 0, 00:10:25.885 "data_size": 63488 00:10:25.885 }, 00:10:25.885 { 00:10:25.885 "name": null, 00:10:25.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.885 "is_configured": false, 00:10:25.885 "data_offset": 2048, 00:10:25.885 "data_size": 63488 00:10:25.885 }, 00:10:25.885 { 00:10:25.885 "name": null, 00:10:25.885 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:25.885 "is_configured": false, 00:10:25.885 "data_offset": 2048, 00:10:25.885 "data_size": 63488 00:10:25.885 } 00:10:25.885 ] 00:10:25.885 }' 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.885 09:47:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.480 [2024-12-06 09:47:51.514265] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:26.480 [2024-12-06 09:47:51.514381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.480 [2024-12-06 09:47:51.514418] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:26.480 [2024-12-06 09:47:51.514444] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.480 [2024-12-06 09:47:51.514921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.480 [2024-12-06 09:47:51.514981] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:26.480 [2024-12-06 09:47:51.515096] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:26.480 [2024-12-06 09:47:51.515159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:26.480 pt2 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.480 [2024-12-06 09:47:51.526202] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:26.480 [2024-12-06 09:47:51.526280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.480 [2024-12-06 09:47:51.526313] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:26.480 [2024-12-06 09:47:51.526338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.480 [2024-12-06 09:47:51.526695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.480 [2024-12-06 09:47:51.526746] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:26.480 [2024-12-06 09:47:51.526829] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:26.480 [2024-12-06 09:47:51.526878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:26.480 pt3 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.480 [2024-12-06 09:47:51.538157] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:26.480 [2024-12-06 09:47:51.538246] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.480 [2024-12-06 09:47:51.538276] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:26.480 [2024-12-06 09:47:51.538301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.480 [2024-12-06 09:47:51.538657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.480 [2024-12-06 09:47:51.538708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:26.480 [2024-12-06 09:47:51.538787] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:26.480 [2024-12-06 09:47:51.538833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:26.480 [2024-12-06 09:47:51.538963] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:26.480 [2024-12-06 09:47:51.538998] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:26.480 [2024-12-06 09:47:51.539244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:26.480 [2024-12-06 09:47:51.539417] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:26.480 [2024-12-06 09:47:51.539461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:26.480 [2024-12-06 09:47:51.539608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.480 pt4 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.480 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.480 "name": "raid_bdev1", 00:10:26.480 "uuid": "c16d8ada-6ccb-4e13-868e-224c59aa8e8b", 00:10:26.480 "strip_size_kb": 64, 00:10:26.480 "state": "online", 00:10:26.480 "raid_level": "raid0", 00:10:26.480 
"superblock": true, 00:10:26.480 "num_base_bdevs": 4, 00:10:26.480 "num_base_bdevs_discovered": 4, 00:10:26.480 "num_base_bdevs_operational": 4, 00:10:26.480 "base_bdevs_list": [ 00:10:26.480 { 00:10:26.480 "name": "pt1", 00:10:26.480 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:26.480 "is_configured": true, 00:10:26.480 "data_offset": 2048, 00:10:26.480 "data_size": 63488 00:10:26.480 }, 00:10:26.480 { 00:10:26.480 "name": "pt2", 00:10:26.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.480 "is_configured": true, 00:10:26.480 "data_offset": 2048, 00:10:26.480 "data_size": 63488 00:10:26.480 }, 00:10:26.480 { 00:10:26.480 "name": "pt3", 00:10:26.480 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.480 "is_configured": true, 00:10:26.480 "data_offset": 2048, 00:10:26.480 "data_size": 63488 00:10:26.480 }, 00:10:26.480 { 00:10:26.480 "name": "pt4", 00:10:26.480 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:26.480 "is_configured": true, 00:10:26.480 "data_offset": 2048, 00:10:26.480 "data_size": 63488 00:10:26.480 } 00:10:26.480 ] 00:10:26.480 }' 00:10:26.481 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.481 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.741 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:26.741 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:26.741 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.741 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.741 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.741 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.741 09:47:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.741 09:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.741 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.741 09:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.741 [2024-12-06 09:47:51.997741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.001 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.001 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.001 "name": "raid_bdev1", 00:10:27.001 "aliases": [ 00:10:27.001 "c16d8ada-6ccb-4e13-868e-224c59aa8e8b" 00:10:27.001 ], 00:10:27.001 "product_name": "Raid Volume", 00:10:27.001 "block_size": 512, 00:10:27.001 "num_blocks": 253952, 00:10:27.001 "uuid": "c16d8ada-6ccb-4e13-868e-224c59aa8e8b", 00:10:27.001 "assigned_rate_limits": { 00:10:27.001 "rw_ios_per_sec": 0, 00:10:27.001 "rw_mbytes_per_sec": 0, 00:10:27.001 "r_mbytes_per_sec": 0, 00:10:27.001 "w_mbytes_per_sec": 0 00:10:27.001 }, 00:10:27.001 "claimed": false, 00:10:27.001 "zoned": false, 00:10:27.001 "supported_io_types": { 00:10:27.001 "read": true, 00:10:27.001 "write": true, 00:10:27.001 "unmap": true, 00:10:27.001 "flush": true, 00:10:27.001 "reset": true, 00:10:27.001 "nvme_admin": false, 00:10:27.001 "nvme_io": false, 00:10:27.001 "nvme_io_md": false, 00:10:27.001 "write_zeroes": true, 00:10:27.001 "zcopy": false, 00:10:27.001 "get_zone_info": false, 00:10:27.001 "zone_management": false, 00:10:27.001 "zone_append": false, 00:10:27.001 "compare": false, 00:10:27.001 "compare_and_write": false, 00:10:27.001 "abort": false, 00:10:27.001 "seek_hole": false, 00:10:27.001 "seek_data": false, 00:10:27.001 "copy": false, 00:10:27.001 "nvme_iov_md": false 00:10:27.001 }, 00:10:27.001 
"memory_domains": [ 00:10:27.001 { 00:10:27.001 "dma_device_id": "system", 00:10:27.001 "dma_device_type": 1 00:10:27.001 }, 00:10:27.001 { 00:10:27.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.001 "dma_device_type": 2 00:10:27.001 }, 00:10:27.001 { 00:10:27.001 "dma_device_id": "system", 00:10:27.001 "dma_device_type": 1 00:10:27.001 }, 00:10:27.001 { 00:10:27.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.001 "dma_device_type": 2 00:10:27.001 }, 00:10:27.001 { 00:10:27.001 "dma_device_id": "system", 00:10:27.001 "dma_device_type": 1 00:10:27.001 }, 00:10:27.001 { 00:10:27.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.001 "dma_device_type": 2 00:10:27.001 }, 00:10:27.001 { 00:10:27.001 "dma_device_id": "system", 00:10:27.001 "dma_device_type": 1 00:10:27.001 }, 00:10:27.001 { 00:10:27.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.001 "dma_device_type": 2 00:10:27.001 } 00:10:27.001 ], 00:10:27.001 "driver_specific": { 00:10:27.001 "raid": { 00:10:27.001 "uuid": "c16d8ada-6ccb-4e13-868e-224c59aa8e8b", 00:10:27.001 "strip_size_kb": 64, 00:10:27.001 "state": "online", 00:10:27.001 "raid_level": "raid0", 00:10:27.001 "superblock": true, 00:10:27.001 "num_base_bdevs": 4, 00:10:27.001 "num_base_bdevs_discovered": 4, 00:10:27.001 "num_base_bdevs_operational": 4, 00:10:27.001 "base_bdevs_list": [ 00:10:27.001 { 00:10:27.001 "name": "pt1", 00:10:27.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:27.001 "is_configured": true, 00:10:27.001 "data_offset": 2048, 00:10:27.001 "data_size": 63488 00:10:27.001 }, 00:10:27.001 { 00:10:27.001 "name": "pt2", 00:10:27.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.001 "is_configured": true, 00:10:27.001 "data_offset": 2048, 00:10:27.001 "data_size": 63488 00:10:27.001 }, 00:10:27.001 { 00:10:27.001 "name": "pt3", 00:10:27.001 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.001 "is_configured": true, 00:10:27.001 "data_offset": 2048, 00:10:27.001 "data_size": 63488 
00:10:27.001 }, 00:10:27.001 { 00:10:27.001 "name": "pt4", 00:10:27.001 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:27.001 "is_configured": true, 00:10:27.001 "data_offset": 2048, 00:10:27.001 "data_size": 63488 00:10:27.001 } 00:10:27.001 ] 00:10:27.001 } 00:10:27.001 } 00:10:27.001 }' 00:10:27.001 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.001 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:27.001 pt2 00:10:27.001 pt3 00:10:27.002 pt4' 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.002 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.262 [2024-12-06 09:47:52.293215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c16d8ada-6ccb-4e13-868e-224c59aa8e8b '!=' c16d8ada-6ccb-4e13-868e-224c59aa8e8b ']' 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70658 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70658 ']' 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70658 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70658 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70658' 00:10:27.262 killing process with pid 70658 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70658 00:10:27.262 [2024-12-06 09:47:52.378164] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.262 [2024-12-06 09:47:52.378318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.262 09:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70658 00:10:27.262 [2024-12-06 09:47:52.378428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.262 [2024-12-06 09:47:52.378439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:27.521 [2024-12-06 09:47:52.767915] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.902 09:47:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:28.902 00:10:28.902 real 0m5.556s 00:10:28.902 user 0m7.966s 00:10:28.902 sys 0m0.964s 00:10:28.902 09:47:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.902 ************************************ 00:10:28.902 END TEST raid_superblock_test 00:10:28.902 ************************************ 00:10:28.902 09:47:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.902 09:47:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:28.902 09:47:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:28.902 09:47:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.902 09:47:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.902 ************************************ 00:10:28.902 START TEST raid_read_error_test 00:10:28.902 ************************************ 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Q1oDbEq5U7 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70922 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70922 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70922 ']' 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.902 09:47:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.902 [2024-12-06 09:47:54.052876] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:10:28.902 [2024-12-06 09:47:54.053057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70922 ] 00:10:29.162 [2024-12-06 09:47:54.228989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.162 [2024-12-06 09:47:54.343645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.421 [2024-12-06 09:47:54.543759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.421 [2024-12-06 09:47:54.543935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.681 BaseBdev1_malloc 00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.681 true 00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.681 09:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 [2024-12-06 09:47:54.955426] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:29.942 [2024-12-06 09:47:54.955567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.942 [2024-12-06 09:47:54.955608] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:29.942 [2024-12-06 09:47:54.955643] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.942 [2024-12-06 09:47:54.957835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.942 [2024-12-06 09:47:54.957926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:29.942 BaseBdev1 00:10:29.942 09:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.942 09:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.942 09:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:29.942 09:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.942 09:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 BaseBdev2_malloc 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 true 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 [2024-12-06 09:47:55.025613] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:29.942 [2024-12-06 09:47:55.025714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.942 [2024-12-06 09:47:55.025769] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:29.942 [2024-12-06 09:47:55.025800] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.942 [2024-12-06 09:47:55.027996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.942 [2024-12-06 09:47:55.028077] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:29.942 BaseBdev2 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 BaseBdev3_malloc 00:10:29.942 09:47:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 true 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 [2024-12-06 09:47:55.111039] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:29.942 [2024-12-06 09:47:55.111096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.942 [2024-12-06 09:47:55.111118] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:29.942 [2024-12-06 09:47:55.111128] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.942 [2024-12-06 09:47:55.113303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.942 [2024-12-06 09:47:55.113343] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:29.942 BaseBdev3 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 BaseBdev4_malloc 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 true 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.942 [2024-12-06 09:47:55.178716] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:29.942 [2024-12-06 09:47:55.178815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.942 [2024-12-06 09:47:55.178882] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:29.942 [2024-12-06 09:47:55.178919] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.942 [2024-12-06 09:47:55.181088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.942 [2024-12-06 09:47:55.181176] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:29.942 BaseBdev4 00:10:29.942 09:47:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.943 [2024-12-06 09:47:55.190751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.943 [2024-12-06 09:47:55.192629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.943 [2024-12-06 09:47:55.192746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.943 [2024-12-06 09:47:55.192843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:29.943 [2024-12-06 09:47:55.193095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:29.943 [2024-12-06 09:47:55.193164] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:29.943 [2024-12-06 09:47:55.193437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:29.943 [2024-12-06 09:47:55.193635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:29.943 [2024-12-06 09:47:55.193677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:29.943 [2024-12-06 09:47:55.193860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:29.943 09:47:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.943 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.202 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.202 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.202 "name": "raid_bdev1", 00:10:30.202 "uuid": "d2685f68-676e-4475-9bec-dedc1bd7372e", 00:10:30.202 "strip_size_kb": 64, 00:10:30.202 "state": "online", 00:10:30.202 "raid_level": "raid0", 00:10:30.202 "superblock": true, 00:10:30.202 "num_base_bdevs": 4, 00:10:30.202 "num_base_bdevs_discovered": 4, 00:10:30.202 "num_base_bdevs_operational": 4, 00:10:30.202 "base_bdevs_list": [ 00:10:30.202 
{ 00:10:30.202 "name": "BaseBdev1", 00:10:30.202 "uuid": "3a0a98b2-8cec-51fd-9cf5-8d974f883377", 00:10:30.202 "is_configured": true, 00:10:30.202 "data_offset": 2048, 00:10:30.202 "data_size": 63488 00:10:30.202 }, 00:10:30.202 { 00:10:30.202 "name": "BaseBdev2", 00:10:30.202 "uuid": "b552ca14-5834-54f6-94e6-9bf663199b28", 00:10:30.202 "is_configured": true, 00:10:30.202 "data_offset": 2048, 00:10:30.202 "data_size": 63488 00:10:30.202 }, 00:10:30.202 { 00:10:30.202 "name": "BaseBdev3", 00:10:30.202 "uuid": "a98acbc1-7756-534e-8f10-2b8f376254fd", 00:10:30.202 "is_configured": true, 00:10:30.202 "data_offset": 2048, 00:10:30.202 "data_size": 63488 00:10:30.202 }, 00:10:30.202 { 00:10:30.202 "name": "BaseBdev4", 00:10:30.202 "uuid": "53033158-5557-50e9-befc-2e0f569fa6f7", 00:10:30.202 "is_configured": true, 00:10:30.202 "data_offset": 2048, 00:10:30.202 "data_size": 63488 00:10:30.202 } 00:10:30.202 ] 00:10:30.202 }' 00:10:30.202 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.202 09:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.461 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:30.461 09:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:30.720 [2024-12-06 09:47:55.759058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.713 09:47:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.713 09:47:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.713 "name": "raid_bdev1", 00:10:31.713 "uuid": "d2685f68-676e-4475-9bec-dedc1bd7372e", 00:10:31.713 "strip_size_kb": 64, 00:10:31.713 "state": "online", 00:10:31.713 "raid_level": "raid0", 00:10:31.713 "superblock": true, 00:10:31.713 "num_base_bdevs": 4, 00:10:31.713 "num_base_bdevs_discovered": 4, 00:10:31.713 "num_base_bdevs_operational": 4, 00:10:31.713 "base_bdevs_list": [ 00:10:31.713 { 00:10:31.713 "name": "BaseBdev1", 00:10:31.713 "uuid": "3a0a98b2-8cec-51fd-9cf5-8d974f883377", 00:10:31.713 "is_configured": true, 00:10:31.713 "data_offset": 2048, 00:10:31.713 "data_size": 63488 00:10:31.713 }, 00:10:31.713 { 00:10:31.713 "name": "BaseBdev2", 00:10:31.713 "uuid": "b552ca14-5834-54f6-94e6-9bf663199b28", 00:10:31.713 "is_configured": true, 00:10:31.713 "data_offset": 2048, 00:10:31.713 "data_size": 63488 00:10:31.713 }, 00:10:31.713 { 00:10:31.713 "name": "BaseBdev3", 00:10:31.713 "uuid": "a98acbc1-7756-534e-8f10-2b8f376254fd", 00:10:31.713 "is_configured": true, 00:10:31.713 "data_offset": 2048, 00:10:31.713 "data_size": 63488 00:10:31.713 }, 00:10:31.713 { 00:10:31.713 "name": "BaseBdev4", 00:10:31.713 "uuid": "53033158-5557-50e9-befc-2e0f569fa6f7", 00:10:31.713 "is_configured": true, 00:10:31.713 "data_offset": 2048, 00:10:31.713 "data_size": 63488 00:10:31.713 } 00:10:31.713 ] 00:10:31.713 }' 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.713 09:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.973 [2024-12-06 09:47:57.159280] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:31.973 [2024-12-06 09:47:57.159363] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.973 [2024-12-06 09:47:57.162046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.973 [2024-12-06 09:47:57.162159] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.973 [2024-12-06 09:47:57.162223] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.973 [2024-12-06 09:47:57.162343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:31.973 { 00:10:31.973 "results": [ 00:10:31.973 { 00:10:31.973 "job": "raid_bdev1", 00:10:31.973 "core_mask": "0x1", 00:10:31.973 "workload": "randrw", 00:10:31.973 "percentage": 50, 00:10:31.973 "status": "finished", 00:10:31.973 "queue_depth": 1, 00:10:31.973 "io_size": 131072, 00:10:31.973 "runtime": 1.401239, 00:10:31.973 "iops": 15419.21114099736, 00:10:31.973 "mibps": 1927.40139262467, 00:10:31.973 "io_failed": 1, 00:10:31.973 "io_timeout": 0, 00:10:31.973 "avg_latency_us": 90.06427393031088, 00:10:31.973 "min_latency_us": 25.7117903930131, 00:10:31.973 "max_latency_us": 1380.8349344978167 00:10:31.973 } 00:10:31.973 ], 00:10:31.973 "core_count": 1 00:10:31.973 } 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70922 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70922 ']' 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70922 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70922 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.973 killing process with pid 70922 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70922' 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70922 00:10:31.973 [2024-12-06 09:47:57.206291] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.973 09:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70922 00:10:32.542 [2024-12-06 09:47:57.532437] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.481 09:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Q1oDbEq5U7 00:10:33.481 09:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:33.481 09:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:33.481 09:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:33.481 09:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:33.481 09:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.481 09:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:33.481 ************************************ 00:10:33.481 END TEST raid_read_error_test 00:10:33.481 09:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:33.481 00:10:33.481 real 0m4.785s 00:10:33.481 user 0m5.691s 00:10:33.481 sys 0m0.567s 
00:10:33.481 09:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.481 09:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.481 ************************************ 00:10:33.742 09:47:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:33.742 09:47:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:33.742 09:47:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.742 09:47:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.742 ************************************ 00:10:33.742 START TEST raid_write_error_test 00:10:33.742 ************************************ 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XHoaHSo4xT 00:10:33.742 09:47:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71068 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71068 00:10:33.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71068 ']' 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.742 09:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.742 [2024-12-06 09:47:58.902267] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:10:33.742 [2024-12-06 09:47:58.902575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71068 ] 00:10:34.003 [2024-12-06 09:47:59.095256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.003 [2024-12-06 09:47:59.211360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.263 [2024-12-06 09:47:59.410773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.263 [2024-12-06 09:47:59.410924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.523 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.523 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:34.524 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.524 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:34.524 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.524 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.524 BaseBdev1_malloc 00:10:34.524 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.524 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:34.524 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.524 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.524 true 00:10:34.524 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:34.524 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:34.524 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.524 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.784 [2024-12-06 09:47:59.795715] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:34.784 [2024-12-06 09:47:59.795856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.784 [2024-12-06 09:47:59.795922] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:34.784 [2024-12-06 09:47:59.795966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.784 [2024-12-06 09:47:59.798201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.784 [2024-12-06 09:47:59.798277] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:34.784 BaseBdev1 00:10:34.784 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.784 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.784 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:34.784 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.784 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.784 BaseBdev2_malloc 00:10:34.784 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.784 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:34.784 09:47:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.784 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.784 true 00:10:34.784 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.784 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.785 [2024-12-06 09:47:59.864477] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:34.785 [2024-12-06 09:47:59.864584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.785 [2024-12-06 09:47:59.864624] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:34.785 [2024-12-06 09:47:59.864636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.785 [2024-12-06 09:47:59.866930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.785 [2024-12-06 09:47:59.866970] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:34.785 BaseBdev2 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:34.785 BaseBdev3_malloc 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.785 true 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.785 [2024-12-06 09:47:59.945908] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:34.785 [2024-12-06 09:47:59.946021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.785 [2024-12-06 09:47:59.946059] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:34.785 [2024-12-06 09:47:59.946088] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.785 [2024-12-06 09:47:59.948353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.785 [2024-12-06 09:47:59.948434] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:34.785 BaseBdev3 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.785 BaseBdev4_malloc 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.785 09:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.785 true 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.785 [2024-12-06 09:48:00.012747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:34.785 [2024-12-06 09:48:00.012847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:34.785 [2024-12-06 09:48:00.012870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:34.785 [2024-12-06 09:48:00.012881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:34.785 [2024-12-06 09:48:00.014885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:34.785 [2024-12-06 09:48:00.014927] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:34.785 BaseBdev4 
00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.785 [2024-12-06 09:48:00.024784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.785 [2024-12-06 09:48:00.026537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.785 [2024-12-06 09:48:00.026664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.785 [2024-12-06 09:48:00.026746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:34.785 [2024-12-06 09:48:00.026986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:34.785 [2024-12-06 09:48:00.027040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:34.785 [2024-12-06 09:48:00.027294] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:34.785 [2024-12-06 09:48:00.027494] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:34.785 [2024-12-06 09:48:00.027534] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:34.785 [2024-12-06 09:48:00.027720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.785 09:48:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.786 09:48:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.045 09:48:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.045 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.045 "name": "raid_bdev1", 00:10:35.045 "uuid": "ccf1a378-7e54-4854-acae-f28b5e50d767", 00:10:35.045 "strip_size_kb": 64, 00:10:35.045 "state": "online", 00:10:35.045 "raid_level": "raid0", 00:10:35.045 "superblock": true, 00:10:35.045 "num_base_bdevs": 4, 00:10:35.045 "num_base_bdevs_discovered": 4, 00:10:35.045 
"num_base_bdevs_operational": 4, 00:10:35.045 "base_bdevs_list": [ 00:10:35.045 { 00:10:35.045 "name": "BaseBdev1", 00:10:35.045 "uuid": "05506d4a-8478-5a0a-8eda-3e7aa3492dd8", 00:10:35.045 "is_configured": true, 00:10:35.045 "data_offset": 2048, 00:10:35.045 "data_size": 63488 00:10:35.045 }, 00:10:35.045 { 00:10:35.045 "name": "BaseBdev2", 00:10:35.045 "uuid": "5f2c9d2b-b638-5d21-8252-0dcc7b9f14d8", 00:10:35.045 "is_configured": true, 00:10:35.045 "data_offset": 2048, 00:10:35.045 "data_size": 63488 00:10:35.045 }, 00:10:35.045 { 00:10:35.045 "name": "BaseBdev3", 00:10:35.045 "uuid": "38da571e-632e-5287-986b-fdbeff8e8d54", 00:10:35.045 "is_configured": true, 00:10:35.045 "data_offset": 2048, 00:10:35.045 "data_size": 63488 00:10:35.045 }, 00:10:35.045 { 00:10:35.045 "name": "BaseBdev4", 00:10:35.045 "uuid": "e2226b80-cbde-5178-b7d0-67fbadaf251f", 00:10:35.045 "is_configured": true, 00:10:35.045 "data_offset": 2048, 00:10:35.045 "data_size": 63488 00:10:35.045 } 00:10:35.045 ] 00:10:35.045 }' 00:10:35.045 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.045 09:48:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.305 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:35.305 09:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:35.565 [2024-12-06 09:48:00.593208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.504 "name": "raid_bdev1", 00:10:36.504 "uuid": "ccf1a378-7e54-4854-acae-f28b5e50d767", 00:10:36.504 "strip_size_kb": 64, 00:10:36.504 "state": "online", 00:10:36.504 "raid_level": "raid0", 00:10:36.504 "superblock": true, 00:10:36.504 "num_base_bdevs": 4, 00:10:36.504 "num_base_bdevs_discovered": 4, 00:10:36.504 "num_base_bdevs_operational": 4, 00:10:36.504 "base_bdevs_list": [ 00:10:36.504 { 00:10:36.504 "name": "BaseBdev1", 00:10:36.504 "uuid": "05506d4a-8478-5a0a-8eda-3e7aa3492dd8", 00:10:36.504 "is_configured": true, 00:10:36.504 "data_offset": 2048, 00:10:36.504 "data_size": 63488 00:10:36.504 }, 00:10:36.504 { 00:10:36.504 "name": "BaseBdev2", 00:10:36.504 "uuid": "5f2c9d2b-b638-5d21-8252-0dcc7b9f14d8", 00:10:36.504 "is_configured": true, 00:10:36.504 "data_offset": 2048, 00:10:36.504 "data_size": 63488 00:10:36.504 }, 00:10:36.504 { 00:10:36.504 "name": "BaseBdev3", 00:10:36.504 "uuid": "38da571e-632e-5287-986b-fdbeff8e8d54", 00:10:36.504 "is_configured": true, 00:10:36.504 "data_offset": 2048, 00:10:36.504 "data_size": 63488 00:10:36.504 }, 00:10:36.504 { 00:10:36.504 "name": "BaseBdev4", 00:10:36.504 "uuid": "e2226b80-cbde-5178-b7d0-67fbadaf251f", 00:10:36.504 "is_configured": true, 00:10:36.504 "data_offset": 2048, 00:10:36.504 "data_size": 63488 00:10:36.504 } 00:10:36.504 ] 00:10:36.504 }' 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.504 09:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.764 09:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:36.764 09:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.764 09:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:36.764 [2024-12-06 09:48:01.997717] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:36.764 [2024-12-06 09:48:01.997806] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.764 [2024-12-06 09:48:02.000720] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.764 [2024-12-06 09:48:02.000824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.764 [2024-12-06 09:48:02.000888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:36.764 [2024-12-06 09:48:02.000937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:36.764 { 00:10:36.764 "results": [ 00:10:36.764 { 00:10:36.764 "job": "raid_bdev1", 00:10:36.764 "core_mask": "0x1", 00:10:36.764 "workload": "randrw", 00:10:36.764 "percentage": 50, 00:10:36.764 "status": "finished", 00:10:36.764 "queue_depth": 1, 00:10:36.764 "io_size": 131072, 00:10:36.764 "runtime": 1.405505, 00:10:36.764 "iops": 15511.862284374656, 00:10:36.764 "mibps": 1938.982785546832, 00:10:36.764 "io_failed": 1, 00:10:36.764 "io_timeout": 0, 00:10:36.764 "avg_latency_us": 89.42873043191243, 00:10:36.764 "min_latency_us": 26.382532751091702, 00:10:36.764 "max_latency_us": 1438.071615720524 00:10:36.764 } 00:10:36.764 ], 00:10:36.764 "core_count": 1 00:10:36.764 } 00:10:36.764 09:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.764 09:48:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71068 00:10:36.764 09:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71068 ']' 00:10:36.764 09:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71068 00:10:36.764 09:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:10:36.764 09:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.764 09:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71068 00:10:37.023 killing process with pid 71068 00:10:37.023 09:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.023 09:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.023 09:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71068' 00:10:37.023 09:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71068 00:10:37.023 [2024-12-06 09:48:02.046416] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:37.023 09:48:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71068 00:10:37.282 [2024-12-06 09:48:02.365778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:38.660 09:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XHoaHSo4xT 00:10:38.660 09:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:38.660 09:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:38.660 09:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:38.660 09:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:38.660 09:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:38.660 09:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:38.660 09:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:38.660 00:10:38.660 real 0m4.784s 00:10:38.660 user 0m5.663s 00:10:38.660 sys 0m0.591s 00:10:38.660 09:48:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.660 ************************************ 00:10:38.660 END TEST raid_write_error_test 00:10:38.660 ************************************ 00:10:38.660 09:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.660 09:48:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:38.660 09:48:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:38.660 09:48:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:38.660 09:48:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.660 09:48:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:38.660 ************************************ 00:10:38.660 START TEST raid_state_function_test 00:10:38.660 ************************************ 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71213 00:10:38.660 Process raid pid: 71213 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71213' 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71213 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71213 ']' 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.660 09:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.660 [2024-12-06 09:48:03.739941] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:10:38.660 [2024-12-06 09:48:03.740156] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.660 [2024-12-06 09:48:03.900789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.919 [2024-12-06 09:48:04.020364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.178 [2024-12-06 09:48:04.220567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.178 [2024-12-06 09:48:04.220695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.438 [2024-12-06 09:48:04.582270] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:39.438 [2024-12-06 09:48:04.582376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:39.438 [2024-12-06 09:48:04.582412] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.438 [2024-12-06 09:48:04.582436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.438 [2024-12-06 09:48:04.582461] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:39.438 [2024-12-06 09:48:04.582498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.438 [2024-12-06 09:48:04.582548] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:39.438 [2024-12-06 09:48:04.582570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.438 "name": "Existed_Raid", 00:10:39.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.438 "strip_size_kb": 64, 00:10:39.438 "state": "configuring", 00:10:39.438 "raid_level": "concat", 00:10:39.438 "superblock": false, 00:10:39.438 "num_base_bdevs": 4, 00:10:39.438 "num_base_bdevs_discovered": 0, 00:10:39.438 "num_base_bdevs_operational": 4, 00:10:39.438 "base_bdevs_list": [ 00:10:39.438 { 00:10:39.438 "name": "BaseBdev1", 00:10:39.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.438 "is_configured": false, 00:10:39.438 "data_offset": 0, 00:10:39.438 "data_size": 0 00:10:39.438 }, 00:10:39.438 { 00:10:39.438 "name": "BaseBdev2", 00:10:39.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.438 "is_configured": false, 00:10:39.438 "data_offset": 0, 00:10:39.438 "data_size": 0 00:10:39.438 }, 00:10:39.438 { 00:10:39.438 "name": "BaseBdev3", 00:10:39.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.438 "is_configured": false, 00:10:39.438 "data_offset": 0, 00:10:39.438 "data_size": 0 00:10:39.438 }, 00:10:39.438 { 00:10:39.438 "name": "BaseBdev4", 00:10:39.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.438 "is_configured": false, 00:10:39.438 "data_offset": 0, 00:10:39.438 "data_size": 0 00:10:39.438 } 00:10:39.438 ] 00:10:39.438 }' 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.438 09:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.008 [2024-12-06 09:48:05.077349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.008 [2024-12-06 09:48:05.077445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.008 [2024-12-06 09:48:05.089332] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.008 [2024-12-06 09:48:05.089418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.008 [2024-12-06 09:48:05.089463] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.008 [2024-12-06 09:48:05.089490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.008 [2024-12-06 09:48:05.089511] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.008 [2024-12-06 09:48:05.089535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.008 [2024-12-06 09:48:05.089556] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:40.008 [2024-12-06 09:48:05.089579] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.008 [2024-12-06 09:48:05.137389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.008 BaseBdev1 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.008 [ 00:10:40.008 { 00:10:40.008 "name": "BaseBdev1", 00:10:40.008 "aliases": [ 00:10:40.008 "1f14457a-54ea-4387-a3ba-12420005593d" 00:10:40.008 ], 00:10:40.008 "product_name": "Malloc disk", 00:10:40.008 "block_size": 512, 00:10:40.008 "num_blocks": 65536, 00:10:40.008 "uuid": "1f14457a-54ea-4387-a3ba-12420005593d", 00:10:40.008 "assigned_rate_limits": { 00:10:40.008 "rw_ios_per_sec": 0, 00:10:40.008 "rw_mbytes_per_sec": 0, 00:10:40.008 "r_mbytes_per_sec": 0, 00:10:40.008 "w_mbytes_per_sec": 0 00:10:40.008 }, 00:10:40.008 "claimed": true, 00:10:40.008 "claim_type": "exclusive_write", 00:10:40.008 "zoned": false, 00:10:40.008 "supported_io_types": { 00:10:40.008 "read": true, 00:10:40.008 "write": true, 00:10:40.008 "unmap": true, 00:10:40.008 "flush": true, 00:10:40.008 "reset": true, 00:10:40.008 "nvme_admin": false, 00:10:40.008 "nvme_io": false, 00:10:40.008 "nvme_io_md": false, 00:10:40.008 "write_zeroes": true, 00:10:40.008 "zcopy": true, 00:10:40.008 "get_zone_info": false, 00:10:40.008 "zone_management": false, 00:10:40.008 "zone_append": false, 00:10:40.008 "compare": false, 00:10:40.008 "compare_and_write": false, 00:10:40.008 "abort": true, 00:10:40.008 "seek_hole": false, 00:10:40.008 "seek_data": false, 00:10:40.008 "copy": true, 00:10:40.008 "nvme_iov_md": false 00:10:40.008 }, 00:10:40.008 "memory_domains": [ 00:10:40.008 { 00:10:40.008 "dma_device_id": "system", 00:10:40.008 "dma_device_type": 1 00:10:40.008 }, 00:10:40.008 { 00:10:40.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.008 "dma_device_type": 2 00:10:40.008 } 00:10:40.008 ], 00:10:40.008 "driver_specific": {} 00:10:40.008 } 00:10:40.008 ] 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.008 "name": "Existed_Raid", 
00:10:40.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.008 "strip_size_kb": 64, 00:10:40.008 "state": "configuring", 00:10:40.008 "raid_level": "concat", 00:10:40.008 "superblock": false, 00:10:40.008 "num_base_bdevs": 4, 00:10:40.008 "num_base_bdevs_discovered": 1, 00:10:40.008 "num_base_bdevs_operational": 4, 00:10:40.008 "base_bdevs_list": [ 00:10:40.008 { 00:10:40.008 "name": "BaseBdev1", 00:10:40.008 "uuid": "1f14457a-54ea-4387-a3ba-12420005593d", 00:10:40.008 "is_configured": true, 00:10:40.008 "data_offset": 0, 00:10:40.008 "data_size": 65536 00:10:40.008 }, 00:10:40.008 { 00:10:40.008 "name": "BaseBdev2", 00:10:40.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.008 "is_configured": false, 00:10:40.008 "data_offset": 0, 00:10:40.008 "data_size": 0 00:10:40.008 }, 00:10:40.008 { 00:10:40.008 "name": "BaseBdev3", 00:10:40.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.008 "is_configured": false, 00:10:40.008 "data_offset": 0, 00:10:40.008 "data_size": 0 00:10:40.008 }, 00:10:40.008 { 00:10:40.008 "name": "BaseBdev4", 00:10:40.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.008 "is_configured": false, 00:10:40.008 "data_offset": 0, 00:10:40.008 "data_size": 0 00:10:40.008 } 00:10:40.008 ] 00:10:40.008 }' 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.008 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.575 [2024-12-06 09:48:05.620611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.575 [2024-12-06 09:48:05.620726] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.575 [2024-12-06 09:48:05.632640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.575 [2024-12-06 09:48:05.634610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.575 [2024-12-06 09:48:05.634686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.575 [2024-12-06 09:48:05.634722] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.575 [2024-12-06 09:48:05.634766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.575 [2024-12-06 09:48:05.634798] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:40.575 [2024-12-06 09:48:05.634823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.575 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.575 "name": "Existed_Raid", 00:10:40.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.575 "strip_size_kb": 64, 00:10:40.575 "state": "configuring", 00:10:40.575 "raid_level": "concat", 00:10:40.575 "superblock": false, 00:10:40.575 "num_base_bdevs": 4, 00:10:40.575 
"num_base_bdevs_discovered": 1, 00:10:40.575 "num_base_bdevs_operational": 4, 00:10:40.575 "base_bdevs_list": [ 00:10:40.575 { 00:10:40.575 "name": "BaseBdev1", 00:10:40.575 "uuid": "1f14457a-54ea-4387-a3ba-12420005593d", 00:10:40.575 "is_configured": true, 00:10:40.575 "data_offset": 0, 00:10:40.576 "data_size": 65536 00:10:40.576 }, 00:10:40.576 { 00:10:40.576 "name": "BaseBdev2", 00:10:40.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.576 "is_configured": false, 00:10:40.576 "data_offset": 0, 00:10:40.576 "data_size": 0 00:10:40.576 }, 00:10:40.576 { 00:10:40.576 "name": "BaseBdev3", 00:10:40.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.576 "is_configured": false, 00:10:40.576 "data_offset": 0, 00:10:40.576 "data_size": 0 00:10:40.576 }, 00:10:40.576 { 00:10:40.576 "name": "BaseBdev4", 00:10:40.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.576 "is_configured": false, 00:10:40.576 "data_offset": 0, 00:10:40.576 "data_size": 0 00:10:40.576 } 00:10:40.576 ] 00:10:40.576 }' 00:10:40.576 09:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.576 09:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.835 [2024-12-06 09:48:06.075018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.835 BaseBdev2 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:40.835 09:48:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.835 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.835 [ 00:10:40.835 { 00:10:40.835 "name": "BaseBdev2", 00:10:40.835 "aliases": [ 00:10:40.835 "15fb547b-cf06-4b75-8523-55ecd6462439" 00:10:40.835 ], 00:10:40.835 "product_name": "Malloc disk", 00:10:40.835 "block_size": 512, 00:10:41.094 "num_blocks": 65536, 00:10:41.094 "uuid": "15fb547b-cf06-4b75-8523-55ecd6462439", 00:10:41.094 "assigned_rate_limits": { 00:10:41.094 "rw_ios_per_sec": 0, 00:10:41.094 "rw_mbytes_per_sec": 0, 00:10:41.094 "r_mbytes_per_sec": 0, 00:10:41.094 "w_mbytes_per_sec": 0 00:10:41.094 }, 00:10:41.094 "claimed": true, 00:10:41.094 "claim_type": "exclusive_write", 00:10:41.094 "zoned": false, 00:10:41.094 "supported_io_types": { 
00:10:41.094 "read": true, 00:10:41.094 "write": true, 00:10:41.094 "unmap": true, 00:10:41.094 "flush": true, 00:10:41.094 "reset": true, 00:10:41.094 "nvme_admin": false, 00:10:41.094 "nvme_io": false, 00:10:41.094 "nvme_io_md": false, 00:10:41.094 "write_zeroes": true, 00:10:41.094 "zcopy": true, 00:10:41.094 "get_zone_info": false, 00:10:41.094 "zone_management": false, 00:10:41.094 "zone_append": false, 00:10:41.094 "compare": false, 00:10:41.094 "compare_and_write": false, 00:10:41.094 "abort": true, 00:10:41.094 "seek_hole": false, 00:10:41.094 "seek_data": false, 00:10:41.094 "copy": true, 00:10:41.094 "nvme_iov_md": false 00:10:41.094 }, 00:10:41.094 "memory_domains": [ 00:10:41.094 { 00:10:41.094 "dma_device_id": "system", 00:10:41.094 "dma_device_type": 1 00:10:41.094 }, 00:10:41.094 { 00:10:41.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.094 "dma_device_type": 2 00:10:41.094 } 00:10:41.094 ], 00:10:41.094 "driver_specific": {} 00:10:41.094 } 00:10:41.094 ] 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.094 "name": "Existed_Raid", 00:10:41.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.094 "strip_size_kb": 64, 00:10:41.094 "state": "configuring", 00:10:41.094 "raid_level": "concat", 00:10:41.094 "superblock": false, 00:10:41.094 "num_base_bdevs": 4, 00:10:41.094 "num_base_bdevs_discovered": 2, 00:10:41.094 "num_base_bdevs_operational": 4, 00:10:41.094 "base_bdevs_list": [ 00:10:41.094 { 00:10:41.094 "name": "BaseBdev1", 00:10:41.094 "uuid": "1f14457a-54ea-4387-a3ba-12420005593d", 00:10:41.094 "is_configured": true, 00:10:41.094 "data_offset": 0, 00:10:41.094 "data_size": 65536 00:10:41.094 }, 00:10:41.094 { 00:10:41.094 "name": "BaseBdev2", 00:10:41.094 "uuid": "15fb547b-cf06-4b75-8523-55ecd6462439", 00:10:41.094 
"is_configured": true, 00:10:41.094 "data_offset": 0, 00:10:41.094 "data_size": 65536 00:10:41.094 }, 00:10:41.094 { 00:10:41.094 "name": "BaseBdev3", 00:10:41.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.094 "is_configured": false, 00:10:41.094 "data_offset": 0, 00:10:41.094 "data_size": 0 00:10:41.094 }, 00:10:41.094 { 00:10:41.094 "name": "BaseBdev4", 00:10:41.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.094 "is_configured": false, 00:10:41.094 "data_offset": 0, 00:10:41.094 "data_size": 0 00:10:41.094 } 00:10:41.094 ] 00:10:41.094 }' 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.094 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.355 [2024-12-06 09:48:06.579618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.355 BaseBdev3 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.355 [ 00:10:41.355 { 00:10:41.355 "name": "BaseBdev3", 00:10:41.355 "aliases": [ 00:10:41.355 "3fb1b8fc-63ee-4bfb-9914-4b0d21d5f924" 00:10:41.355 ], 00:10:41.355 "product_name": "Malloc disk", 00:10:41.355 "block_size": 512, 00:10:41.355 "num_blocks": 65536, 00:10:41.355 "uuid": "3fb1b8fc-63ee-4bfb-9914-4b0d21d5f924", 00:10:41.355 "assigned_rate_limits": { 00:10:41.355 "rw_ios_per_sec": 0, 00:10:41.355 "rw_mbytes_per_sec": 0, 00:10:41.355 "r_mbytes_per_sec": 0, 00:10:41.355 "w_mbytes_per_sec": 0 00:10:41.355 }, 00:10:41.355 "claimed": true, 00:10:41.355 "claim_type": "exclusive_write", 00:10:41.355 "zoned": false, 00:10:41.355 "supported_io_types": { 00:10:41.355 "read": true, 00:10:41.355 "write": true, 00:10:41.355 "unmap": true, 00:10:41.355 "flush": true, 00:10:41.355 "reset": true, 00:10:41.355 "nvme_admin": false, 00:10:41.355 "nvme_io": false, 00:10:41.355 "nvme_io_md": false, 00:10:41.355 "write_zeroes": true, 00:10:41.355 "zcopy": true, 00:10:41.355 "get_zone_info": false, 00:10:41.355 "zone_management": false, 00:10:41.355 "zone_append": false, 00:10:41.355 "compare": false, 00:10:41.355 "compare_and_write": false, 
00:10:41.355 "abort": true, 00:10:41.355 "seek_hole": false, 00:10:41.355 "seek_data": false, 00:10:41.355 "copy": true, 00:10:41.355 "nvme_iov_md": false 00:10:41.355 }, 00:10:41.355 "memory_domains": [ 00:10:41.355 { 00:10:41.355 "dma_device_id": "system", 00:10:41.355 "dma_device_type": 1 00:10:41.355 }, 00:10:41.355 { 00:10:41.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.355 "dma_device_type": 2 00:10:41.355 } 00:10:41.355 ], 00:10:41.355 "driver_specific": {} 00:10:41.355 } 00:10:41.355 ] 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.355 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.615 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.615 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.615 "name": "Existed_Raid", 00:10:41.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.615 "strip_size_kb": 64, 00:10:41.615 "state": "configuring", 00:10:41.615 "raid_level": "concat", 00:10:41.615 "superblock": false, 00:10:41.615 "num_base_bdevs": 4, 00:10:41.615 "num_base_bdevs_discovered": 3, 00:10:41.615 "num_base_bdevs_operational": 4, 00:10:41.615 "base_bdevs_list": [ 00:10:41.615 { 00:10:41.615 "name": "BaseBdev1", 00:10:41.615 "uuid": "1f14457a-54ea-4387-a3ba-12420005593d", 00:10:41.615 "is_configured": true, 00:10:41.615 "data_offset": 0, 00:10:41.615 "data_size": 65536 00:10:41.615 }, 00:10:41.615 { 00:10:41.615 "name": "BaseBdev2", 00:10:41.615 "uuid": "15fb547b-cf06-4b75-8523-55ecd6462439", 00:10:41.615 "is_configured": true, 00:10:41.615 "data_offset": 0, 00:10:41.615 "data_size": 65536 00:10:41.615 }, 00:10:41.615 { 00:10:41.615 "name": "BaseBdev3", 00:10:41.615 "uuid": "3fb1b8fc-63ee-4bfb-9914-4b0d21d5f924", 00:10:41.615 "is_configured": true, 00:10:41.615 "data_offset": 0, 00:10:41.615 "data_size": 65536 00:10:41.615 }, 00:10:41.615 { 00:10:41.615 "name": "BaseBdev4", 00:10:41.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.615 "is_configured": false, 
00:10:41.615 "data_offset": 0, 00:10:41.615 "data_size": 0 00:10:41.616 } 00:10:41.616 ] 00:10:41.616 }' 00:10:41.616 09:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.616 09:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.875 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:41.875 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.875 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.875 [2024-12-06 09:48:07.101932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:41.875 [2024-12-06 09:48:07.102065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:41.875 [2024-12-06 09:48:07.102090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:41.875 [2024-12-06 09:48:07.102418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:41.875 [2024-12-06 09:48:07.102635] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:41.875 [2024-12-06 09:48:07.102680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:41.875 [2024-12-06 09:48:07.102977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.875 BaseBdev4 00:10:41.875 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.875 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.876 [ 00:10:41.876 { 00:10:41.876 "name": "BaseBdev4", 00:10:41.876 "aliases": [ 00:10:41.876 "31e3691b-1f73-412e-9fa3-2f1d38416a71" 00:10:41.876 ], 00:10:41.876 "product_name": "Malloc disk", 00:10:41.876 "block_size": 512, 00:10:41.876 "num_blocks": 65536, 00:10:41.876 "uuid": "31e3691b-1f73-412e-9fa3-2f1d38416a71", 00:10:41.876 "assigned_rate_limits": { 00:10:41.876 "rw_ios_per_sec": 0, 00:10:41.876 "rw_mbytes_per_sec": 0, 00:10:41.876 "r_mbytes_per_sec": 0, 00:10:41.876 "w_mbytes_per_sec": 0 00:10:41.876 }, 00:10:41.876 "claimed": true, 00:10:41.876 "claim_type": "exclusive_write", 00:10:41.876 "zoned": false, 00:10:41.876 "supported_io_types": { 00:10:41.876 "read": true, 00:10:41.876 "write": true, 00:10:41.876 "unmap": true, 00:10:41.876 "flush": true, 00:10:41.876 "reset": true, 00:10:41.876 
"nvme_admin": false, 00:10:41.876 "nvme_io": false, 00:10:41.876 "nvme_io_md": false, 00:10:41.876 "write_zeroes": true, 00:10:41.876 "zcopy": true, 00:10:41.876 "get_zone_info": false, 00:10:41.876 "zone_management": false, 00:10:41.876 "zone_append": false, 00:10:41.876 "compare": false, 00:10:41.876 "compare_and_write": false, 00:10:41.876 "abort": true, 00:10:41.876 "seek_hole": false, 00:10:41.876 "seek_data": false, 00:10:41.876 "copy": true, 00:10:41.876 "nvme_iov_md": false 00:10:41.876 }, 00:10:41.876 "memory_domains": [ 00:10:41.876 { 00:10:41.876 "dma_device_id": "system", 00:10:41.876 "dma_device_type": 1 00:10:41.876 }, 00:10:41.876 { 00:10:41.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.876 "dma_device_type": 2 00:10:41.876 } 00:10:41.876 ], 00:10:41.876 "driver_specific": {} 00:10:41.876 } 00:10:41.876 ] 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.876 
09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.876 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.135 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.135 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.135 "name": "Existed_Raid", 00:10:42.135 "uuid": "a79cdcdc-33b5-4aae-883b-d00673fdc1fe", 00:10:42.135 "strip_size_kb": 64, 00:10:42.135 "state": "online", 00:10:42.135 "raid_level": "concat", 00:10:42.135 "superblock": false, 00:10:42.135 "num_base_bdevs": 4, 00:10:42.135 "num_base_bdevs_discovered": 4, 00:10:42.135 "num_base_bdevs_operational": 4, 00:10:42.135 "base_bdevs_list": [ 00:10:42.135 { 00:10:42.135 "name": "BaseBdev1", 00:10:42.135 "uuid": "1f14457a-54ea-4387-a3ba-12420005593d", 00:10:42.135 "is_configured": true, 00:10:42.135 "data_offset": 0, 00:10:42.135 "data_size": 65536 00:10:42.135 }, 00:10:42.135 { 00:10:42.135 "name": "BaseBdev2", 00:10:42.135 "uuid": "15fb547b-cf06-4b75-8523-55ecd6462439", 00:10:42.135 "is_configured": true, 00:10:42.135 "data_offset": 0, 00:10:42.135 "data_size": 65536 00:10:42.135 }, 00:10:42.135 { 00:10:42.135 "name": "BaseBdev3", 
00:10:42.135 "uuid": "3fb1b8fc-63ee-4bfb-9914-4b0d21d5f924", 00:10:42.135 "is_configured": true, 00:10:42.135 "data_offset": 0, 00:10:42.135 "data_size": 65536 00:10:42.135 }, 00:10:42.135 { 00:10:42.135 "name": "BaseBdev4", 00:10:42.135 "uuid": "31e3691b-1f73-412e-9fa3-2f1d38416a71", 00:10:42.135 "is_configured": true, 00:10:42.135 "data_offset": 0, 00:10:42.135 "data_size": 65536 00:10:42.135 } 00:10:42.135 ] 00:10:42.135 }' 00:10:42.135 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.135 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.394 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:42.394 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:42.394 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:42.394 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:42.394 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:42.394 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:42.394 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:42.394 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:42.394 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.394 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.394 [2024-12-06 09:48:07.577594] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.394 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.394 
09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.394 "name": "Existed_Raid", 00:10:42.394 "aliases": [ 00:10:42.394 "a79cdcdc-33b5-4aae-883b-d00673fdc1fe" 00:10:42.394 ], 00:10:42.394 "product_name": "Raid Volume", 00:10:42.394 "block_size": 512, 00:10:42.394 "num_blocks": 262144, 00:10:42.394 "uuid": "a79cdcdc-33b5-4aae-883b-d00673fdc1fe", 00:10:42.394 "assigned_rate_limits": { 00:10:42.394 "rw_ios_per_sec": 0, 00:10:42.394 "rw_mbytes_per_sec": 0, 00:10:42.394 "r_mbytes_per_sec": 0, 00:10:42.394 "w_mbytes_per_sec": 0 00:10:42.394 }, 00:10:42.394 "claimed": false, 00:10:42.394 "zoned": false, 00:10:42.394 "supported_io_types": { 00:10:42.394 "read": true, 00:10:42.394 "write": true, 00:10:42.394 "unmap": true, 00:10:42.394 "flush": true, 00:10:42.394 "reset": true, 00:10:42.394 "nvme_admin": false, 00:10:42.394 "nvme_io": false, 00:10:42.394 "nvme_io_md": false, 00:10:42.394 "write_zeroes": true, 00:10:42.394 "zcopy": false, 00:10:42.394 "get_zone_info": false, 00:10:42.394 "zone_management": false, 00:10:42.394 "zone_append": false, 00:10:42.394 "compare": false, 00:10:42.394 "compare_and_write": false, 00:10:42.394 "abort": false, 00:10:42.394 "seek_hole": false, 00:10:42.394 "seek_data": false, 00:10:42.394 "copy": false, 00:10:42.394 "nvme_iov_md": false 00:10:42.394 }, 00:10:42.394 "memory_domains": [ 00:10:42.394 { 00:10:42.394 "dma_device_id": "system", 00:10:42.394 "dma_device_type": 1 00:10:42.394 }, 00:10:42.394 { 00:10:42.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.394 "dma_device_type": 2 00:10:42.394 }, 00:10:42.394 { 00:10:42.394 "dma_device_id": "system", 00:10:42.394 "dma_device_type": 1 00:10:42.394 }, 00:10:42.394 { 00:10:42.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.394 "dma_device_type": 2 00:10:42.394 }, 00:10:42.394 { 00:10:42.394 "dma_device_id": "system", 00:10:42.394 "dma_device_type": 1 00:10:42.394 }, 00:10:42.394 { 00:10:42.394 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:42.394 "dma_device_type": 2 00:10:42.395 }, 00:10:42.395 { 00:10:42.395 "dma_device_id": "system", 00:10:42.395 "dma_device_type": 1 00:10:42.395 }, 00:10:42.395 { 00:10:42.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.395 "dma_device_type": 2 00:10:42.395 } 00:10:42.395 ], 00:10:42.395 "driver_specific": { 00:10:42.395 "raid": { 00:10:42.395 "uuid": "a79cdcdc-33b5-4aae-883b-d00673fdc1fe", 00:10:42.395 "strip_size_kb": 64, 00:10:42.395 "state": "online", 00:10:42.395 "raid_level": "concat", 00:10:42.395 "superblock": false, 00:10:42.395 "num_base_bdevs": 4, 00:10:42.395 "num_base_bdevs_discovered": 4, 00:10:42.395 "num_base_bdevs_operational": 4, 00:10:42.395 "base_bdevs_list": [ 00:10:42.395 { 00:10:42.395 "name": "BaseBdev1", 00:10:42.395 "uuid": "1f14457a-54ea-4387-a3ba-12420005593d", 00:10:42.395 "is_configured": true, 00:10:42.395 "data_offset": 0, 00:10:42.395 "data_size": 65536 00:10:42.395 }, 00:10:42.395 { 00:10:42.395 "name": "BaseBdev2", 00:10:42.395 "uuid": "15fb547b-cf06-4b75-8523-55ecd6462439", 00:10:42.395 "is_configured": true, 00:10:42.395 "data_offset": 0, 00:10:42.395 "data_size": 65536 00:10:42.395 }, 00:10:42.395 { 00:10:42.395 "name": "BaseBdev3", 00:10:42.395 "uuid": "3fb1b8fc-63ee-4bfb-9914-4b0d21d5f924", 00:10:42.395 "is_configured": true, 00:10:42.395 "data_offset": 0, 00:10:42.395 "data_size": 65536 00:10:42.395 }, 00:10:42.395 { 00:10:42.395 "name": "BaseBdev4", 00:10:42.395 "uuid": "31e3691b-1f73-412e-9fa3-2f1d38416a71", 00:10:42.395 "is_configured": true, 00:10:42.395 "data_offset": 0, 00:10:42.395 "data_size": 65536 00:10:42.395 } 00:10:42.395 ] 00:10:42.395 } 00:10:42.395 } 00:10:42.395 }' 00:10:42.395 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.395 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:42.395 BaseBdev2 
00:10:42.395 BaseBdev3 00:10:42.395 BaseBdev4' 00:10:42.395 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.654 09:48:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.654 09:48:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.654 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.654 [2024-12-06 09:48:07.892715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.654 [2024-12-06 09:48:07.892790] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.654 [2024-12-06 09:48:07.892863] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.914 09:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.914 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.914 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.914 "name": "Existed_Raid", 00:10:42.914 "uuid": "a79cdcdc-33b5-4aae-883b-d00673fdc1fe", 00:10:42.914 "strip_size_kb": 64, 00:10:42.914 "state": "offline", 00:10:42.914 "raid_level": "concat", 00:10:42.914 "superblock": false, 00:10:42.914 "num_base_bdevs": 4, 00:10:42.914 "num_base_bdevs_discovered": 3, 00:10:42.914 "num_base_bdevs_operational": 3, 00:10:42.914 "base_bdevs_list": [ 00:10:42.914 { 00:10:42.914 "name": null, 00:10:42.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.914 "is_configured": false, 00:10:42.914 "data_offset": 0, 00:10:42.914 "data_size": 65536 00:10:42.914 }, 00:10:42.914 { 00:10:42.914 "name": "BaseBdev2", 00:10:42.914 "uuid": "15fb547b-cf06-4b75-8523-55ecd6462439", 00:10:42.914 "is_configured": 
true, 00:10:42.914 "data_offset": 0, 00:10:42.914 "data_size": 65536 00:10:42.914 }, 00:10:42.914 { 00:10:42.914 "name": "BaseBdev3", 00:10:42.914 "uuid": "3fb1b8fc-63ee-4bfb-9914-4b0d21d5f924", 00:10:42.914 "is_configured": true, 00:10:42.914 "data_offset": 0, 00:10:42.914 "data_size": 65536 00:10:42.914 }, 00:10:42.914 { 00:10:42.914 "name": "BaseBdev4", 00:10:42.914 "uuid": "31e3691b-1f73-412e-9fa3-2f1d38416a71", 00:10:42.914 "is_configured": true, 00:10:42.914 "data_offset": 0, 00:10:42.914 "data_size": 65536 00:10:42.914 } 00:10:42.914 ] 00:10:42.914 }' 00:10:42.914 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.914 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.483 [2024-12-06 09:48:08.506782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.483 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.483 [2024-12-06 09:48:08.662194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:43.742 09:48:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.742 [2024-12-06 09:48:08.813037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:43.742 [2024-12-06 09:48:08.813129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.742 09:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.742 BaseBdev2 00:10:43.742 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.742 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:43.742 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:43.742 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.742 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:43.742 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.742 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:43.742 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.742 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.742 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.000 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.000 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.000 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.000 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.000 [ 00:10:44.000 { 00:10:44.000 "name": "BaseBdev2", 00:10:44.000 "aliases": [ 00:10:44.000 "46bab700-3287-49c4-9fba-91cbc3093c5c" 00:10:44.000 ], 00:10:44.000 "product_name": "Malloc disk", 00:10:44.000 "block_size": 512, 00:10:44.000 "num_blocks": 65536, 00:10:44.000 "uuid": "46bab700-3287-49c4-9fba-91cbc3093c5c", 00:10:44.000 "assigned_rate_limits": { 00:10:44.000 "rw_ios_per_sec": 0, 00:10:44.000 "rw_mbytes_per_sec": 0, 00:10:44.000 "r_mbytes_per_sec": 0, 00:10:44.000 "w_mbytes_per_sec": 0 00:10:44.000 }, 00:10:44.000 "claimed": false, 00:10:44.000 "zoned": false, 00:10:44.000 "supported_io_types": { 00:10:44.000 "read": true, 00:10:44.000 "write": true, 00:10:44.000 "unmap": true, 00:10:44.000 "flush": true, 00:10:44.000 "reset": true, 00:10:44.000 "nvme_admin": false, 00:10:44.000 "nvme_io": false, 00:10:44.000 "nvme_io_md": false, 00:10:44.000 "write_zeroes": true, 00:10:44.000 "zcopy": true, 00:10:44.000 "get_zone_info": false, 00:10:44.000 "zone_management": false, 00:10:44.000 "zone_append": false, 00:10:44.000 "compare": false, 00:10:44.000 "compare_and_write": false, 00:10:44.000 "abort": true, 00:10:44.000 "seek_hole": false, 00:10:44.000 
"seek_data": false, 00:10:44.000 "copy": true, 00:10:44.000 "nvme_iov_md": false 00:10:44.000 }, 00:10:44.000 "memory_domains": [ 00:10:44.000 { 00:10:44.000 "dma_device_id": "system", 00:10:44.000 "dma_device_type": 1 00:10:44.000 }, 00:10:44.000 { 00:10:44.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.000 "dma_device_type": 2 00:10:44.000 } 00:10:44.000 ], 00:10:44.000 "driver_specific": {} 00:10:44.000 } 00:10:44.000 ] 00:10:44.000 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.000 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.000 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.001 BaseBdev3 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.001 [ 00:10:44.001 { 00:10:44.001 "name": "BaseBdev3", 00:10:44.001 "aliases": [ 00:10:44.001 "d0748373-32f2-4bac-9321-c7a25ce6e97d" 00:10:44.001 ], 00:10:44.001 "product_name": "Malloc disk", 00:10:44.001 "block_size": 512, 00:10:44.001 "num_blocks": 65536, 00:10:44.001 "uuid": "d0748373-32f2-4bac-9321-c7a25ce6e97d", 00:10:44.001 "assigned_rate_limits": { 00:10:44.001 "rw_ios_per_sec": 0, 00:10:44.001 "rw_mbytes_per_sec": 0, 00:10:44.001 "r_mbytes_per_sec": 0, 00:10:44.001 "w_mbytes_per_sec": 0 00:10:44.001 }, 00:10:44.001 "claimed": false, 00:10:44.001 "zoned": false, 00:10:44.001 "supported_io_types": { 00:10:44.001 "read": true, 00:10:44.001 "write": true, 00:10:44.001 "unmap": true, 00:10:44.001 "flush": true, 00:10:44.001 "reset": true, 00:10:44.001 "nvme_admin": false, 00:10:44.001 "nvme_io": false, 00:10:44.001 "nvme_io_md": false, 00:10:44.001 "write_zeroes": true, 00:10:44.001 "zcopy": true, 00:10:44.001 "get_zone_info": false, 00:10:44.001 "zone_management": false, 00:10:44.001 "zone_append": false, 00:10:44.001 "compare": false, 00:10:44.001 "compare_and_write": false, 00:10:44.001 "abort": true, 00:10:44.001 "seek_hole": false, 00:10:44.001 "seek_data": false, 
00:10:44.001 "copy": true, 00:10:44.001 "nvme_iov_md": false 00:10:44.001 }, 00:10:44.001 "memory_domains": [ 00:10:44.001 { 00:10:44.001 "dma_device_id": "system", 00:10:44.001 "dma_device_type": 1 00:10:44.001 }, 00:10:44.001 { 00:10:44.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.001 "dma_device_type": 2 00:10:44.001 } 00:10:44.001 ], 00:10:44.001 "driver_specific": {} 00:10:44.001 } 00:10:44.001 ] 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.001 BaseBdev4 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:44.001 
09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.001 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.001 [ 00:10:44.001 { 00:10:44.001 "name": "BaseBdev4", 00:10:44.001 "aliases": [ 00:10:44.001 "df3948ba-f70e-4895-8236-df462705c3e3" 00:10:44.001 ], 00:10:44.001 "product_name": "Malloc disk", 00:10:44.001 "block_size": 512, 00:10:44.001 "num_blocks": 65536, 00:10:44.001 "uuid": "df3948ba-f70e-4895-8236-df462705c3e3", 00:10:44.001 "assigned_rate_limits": { 00:10:44.001 "rw_ios_per_sec": 0, 00:10:44.001 "rw_mbytes_per_sec": 0, 00:10:44.001 "r_mbytes_per_sec": 0, 00:10:44.001 "w_mbytes_per_sec": 0 00:10:44.001 }, 00:10:44.001 "claimed": false, 00:10:44.001 "zoned": false, 00:10:44.001 "supported_io_types": { 00:10:44.001 "read": true, 00:10:44.001 "write": true, 00:10:44.001 "unmap": true, 00:10:44.001 "flush": true, 00:10:44.001 "reset": true, 00:10:44.001 "nvme_admin": false, 00:10:44.001 "nvme_io": false, 00:10:44.001 "nvme_io_md": false, 00:10:44.001 "write_zeroes": true, 00:10:44.001 "zcopy": true, 00:10:44.001 "get_zone_info": false, 00:10:44.001 "zone_management": false, 00:10:44.001 "zone_append": false, 00:10:44.001 "compare": false, 00:10:44.001 "compare_and_write": false, 00:10:44.001 "abort": true, 00:10:44.001 "seek_hole": false, 00:10:44.001 "seek_data": false, 00:10:44.001 
"copy": true, 00:10:44.001 "nvme_iov_md": false 00:10:44.001 }, 00:10:44.002 "memory_domains": [ 00:10:44.002 { 00:10:44.002 "dma_device_id": "system", 00:10:44.002 "dma_device_type": 1 00:10:44.002 }, 00:10:44.002 { 00:10:44.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.002 "dma_device_type": 2 00:10:44.002 } 00:10:44.002 ], 00:10:44.002 "driver_specific": {} 00:10:44.002 } 00:10:44.002 ] 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.002 [2024-12-06 09:48:09.201073] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.002 [2024-12-06 09:48:09.201171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.002 [2024-12-06 09:48:09.201214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.002 [2024-12-06 09:48:09.202992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.002 [2024-12-06 09:48:09.203080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.002 09:48:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.002 "name": "Existed_Raid", 00:10:44.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.002 "strip_size_kb": 64, 00:10:44.002 "state": "configuring", 00:10:44.002 
"raid_level": "concat", 00:10:44.002 "superblock": false, 00:10:44.002 "num_base_bdevs": 4, 00:10:44.002 "num_base_bdevs_discovered": 3, 00:10:44.002 "num_base_bdevs_operational": 4, 00:10:44.002 "base_bdevs_list": [ 00:10:44.002 { 00:10:44.002 "name": "BaseBdev1", 00:10:44.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.002 "is_configured": false, 00:10:44.002 "data_offset": 0, 00:10:44.002 "data_size": 0 00:10:44.002 }, 00:10:44.002 { 00:10:44.002 "name": "BaseBdev2", 00:10:44.002 "uuid": "46bab700-3287-49c4-9fba-91cbc3093c5c", 00:10:44.002 "is_configured": true, 00:10:44.002 "data_offset": 0, 00:10:44.002 "data_size": 65536 00:10:44.002 }, 00:10:44.002 { 00:10:44.002 "name": "BaseBdev3", 00:10:44.002 "uuid": "d0748373-32f2-4bac-9321-c7a25ce6e97d", 00:10:44.002 "is_configured": true, 00:10:44.002 "data_offset": 0, 00:10:44.002 "data_size": 65536 00:10:44.002 }, 00:10:44.002 { 00:10:44.002 "name": "BaseBdev4", 00:10:44.002 "uuid": "df3948ba-f70e-4895-8236-df462705c3e3", 00:10:44.002 "is_configured": true, 00:10:44.002 "data_offset": 0, 00:10:44.002 "data_size": 65536 00:10:44.002 } 00:10:44.002 ] 00:10:44.002 }' 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.002 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.570 [2024-12-06 09:48:09.692270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.570 "name": "Existed_Raid", 00:10:44.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.570 "strip_size_kb": 64, 00:10:44.570 "state": "configuring", 00:10:44.570 "raid_level": "concat", 00:10:44.570 "superblock": false, 
00:10:44.570 "num_base_bdevs": 4, 00:10:44.570 "num_base_bdevs_discovered": 2, 00:10:44.570 "num_base_bdevs_operational": 4, 00:10:44.570 "base_bdevs_list": [ 00:10:44.570 { 00:10:44.570 "name": "BaseBdev1", 00:10:44.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.570 "is_configured": false, 00:10:44.570 "data_offset": 0, 00:10:44.570 "data_size": 0 00:10:44.570 }, 00:10:44.570 { 00:10:44.570 "name": null, 00:10:44.570 "uuid": "46bab700-3287-49c4-9fba-91cbc3093c5c", 00:10:44.570 "is_configured": false, 00:10:44.570 "data_offset": 0, 00:10:44.570 "data_size": 65536 00:10:44.570 }, 00:10:44.570 { 00:10:44.570 "name": "BaseBdev3", 00:10:44.570 "uuid": "d0748373-32f2-4bac-9321-c7a25ce6e97d", 00:10:44.570 "is_configured": true, 00:10:44.570 "data_offset": 0, 00:10:44.570 "data_size": 65536 00:10:44.570 }, 00:10:44.570 { 00:10:44.570 "name": "BaseBdev4", 00:10:44.570 "uuid": "df3948ba-f70e-4895-8236-df462705c3e3", 00:10:44.570 "is_configured": true, 00:10:44.570 "data_offset": 0, 00:10:44.570 "data_size": 65536 00:10:44.570 } 00:10:44.570 ] 00:10:44.570 }' 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.570 09:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:45.139 09:48:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.139 [2024-12-06 09:48:10.184697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.139 BaseBdev1 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.139 [ 00:10:45.139 { 00:10:45.139 "name": "BaseBdev1", 00:10:45.139 "aliases": [ 00:10:45.139 "3b5a4420-0550-4db7-ade4-99b41930bab6" 00:10:45.139 ], 00:10:45.139 "product_name": "Malloc disk", 00:10:45.139 "block_size": 512, 00:10:45.139 "num_blocks": 65536, 00:10:45.139 "uuid": "3b5a4420-0550-4db7-ade4-99b41930bab6", 00:10:45.139 "assigned_rate_limits": { 00:10:45.139 "rw_ios_per_sec": 0, 00:10:45.139 "rw_mbytes_per_sec": 0, 00:10:45.139 "r_mbytes_per_sec": 0, 00:10:45.139 "w_mbytes_per_sec": 0 00:10:45.139 }, 00:10:45.139 "claimed": true, 00:10:45.139 "claim_type": "exclusive_write", 00:10:45.139 "zoned": false, 00:10:45.139 "supported_io_types": { 00:10:45.139 "read": true, 00:10:45.139 "write": true, 00:10:45.139 "unmap": true, 00:10:45.139 "flush": true, 00:10:45.139 "reset": true, 00:10:45.139 "nvme_admin": false, 00:10:45.139 "nvme_io": false, 00:10:45.139 "nvme_io_md": false, 00:10:45.139 "write_zeroes": true, 00:10:45.139 "zcopy": true, 00:10:45.139 "get_zone_info": false, 00:10:45.139 "zone_management": false, 00:10:45.139 "zone_append": false, 00:10:45.139 "compare": false, 00:10:45.139 "compare_and_write": false, 00:10:45.139 "abort": true, 00:10:45.139 "seek_hole": false, 00:10:45.139 "seek_data": false, 00:10:45.139 "copy": true, 00:10:45.139 "nvme_iov_md": false 00:10:45.139 }, 00:10:45.139 "memory_domains": [ 00:10:45.139 { 00:10:45.139 "dma_device_id": "system", 00:10:45.139 "dma_device_type": 1 00:10:45.139 }, 00:10:45.139 { 00:10:45.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.139 "dma_device_type": 2 00:10:45.139 } 00:10:45.139 ], 00:10:45.139 "driver_specific": {} 00:10:45.139 } 00:10:45.139 ] 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.139 "name": "Existed_Raid", 00:10:45.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.139 "strip_size_kb": 64, 00:10:45.139 "state": "configuring", 00:10:45.139 "raid_level": "concat", 00:10:45.139 "superblock": false, 
00:10:45.139 "num_base_bdevs": 4, 00:10:45.139 "num_base_bdevs_discovered": 3, 00:10:45.139 "num_base_bdevs_operational": 4, 00:10:45.139 "base_bdevs_list": [ 00:10:45.139 { 00:10:45.139 "name": "BaseBdev1", 00:10:45.139 "uuid": "3b5a4420-0550-4db7-ade4-99b41930bab6", 00:10:45.139 "is_configured": true, 00:10:45.139 "data_offset": 0, 00:10:45.139 "data_size": 65536 00:10:45.139 }, 00:10:45.139 { 00:10:45.139 "name": null, 00:10:45.139 "uuid": "46bab700-3287-49c4-9fba-91cbc3093c5c", 00:10:45.139 "is_configured": false, 00:10:45.139 "data_offset": 0, 00:10:45.139 "data_size": 65536 00:10:45.139 }, 00:10:45.139 { 00:10:45.139 "name": "BaseBdev3", 00:10:45.139 "uuid": "d0748373-32f2-4bac-9321-c7a25ce6e97d", 00:10:45.139 "is_configured": true, 00:10:45.139 "data_offset": 0, 00:10:45.139 "data_size": 65536 00:10:45.139 }, 00:10:45.139 { 00:10:45.139 "name": "BaseBdev4", 00:10:45.139 "uuid": "df3948ba-f70e-4895-8236-df462705c3e3", 00:10:45.139 "is_configured": true, 00:10:45.139 "data_offset": 0, 00:10:45.139 "data_size": 65536 00:10:45.139 } 00:10:45.139 ] 00:10:45.139 }' 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.139 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:45.707 09:48:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.707 [2024-12-06 09:48:10.755863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.707 "name": "Existed_Raid", 00:10:45.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.707 "strip_size_kb": 64, 00:10:45.707 "state": "configuring", 00:10:45.707 "raid_level": "concat", 00:10:45.707 "superblock": false, 00:10:45.707 "num_base_bdevs": 4, 00:10:45.707 "num_base_bdevs_discovered": 2, 00:10:45.707 "num_base_bdevs_operational": 4, 00:10:45.707 "base_bdevs_list": [ 00:10:45.707 { 00:10:45.707 "name": "BaseBdev1", 00:10:45.707 "uuid": "3b5a4420-0550-4db7-ade4-99b41930bab6", 00:10:45.707 "is_configured": true, 00:10:45.707 "data_offset": 0, 00:10:45.707 "data_size": 65536 00:10:45.707 }, 00:10:45.707 { 00:10:45.707 "name": null, 00:10:45.707 "uuid": "46bab700-3287-49c4-9fba-91cbc3093c5c", 00:10:45.707 "is_configured": false, 00:10:45.707 "data_offset": 0, 00:10:45.707 "data_size": 65536 00:10:45.707 }, 00:10:45.707 { 00:10:45.707 "name": null, 00:10:45.707 "uuid": "d0748373-32f2-4bac-9321-c7a25ce6e97d", 00:10:45.707 "is_configured": false, 00:10:45.707 "data_offset": 0, 00:10:45.707 "data_size": 65536 00:10:45.707 }, 00:10:45.707 { 00:10:45.707 "name": "BaseBdev4", 00:10:45.707 "uuid": "df3948ba-f70e-4895-8236-df462705c3e3", 00:10:45.707 "is_configured": true, 00:10:45.707 "data_offset": 0, 00:10:45.707 "data_size": 65536 00:10:45.707 } 00:10:45.707 ] 00:10:45.707 }' 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.707 09:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.967 [2024-12-06 09:48:11.231021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.967 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.226 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.226 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:10:46.226 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.226 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.226 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.226 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.226 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.226 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.226 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.226 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.226 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.226 "name": "Existed_Raid", 00:10:46.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.226 "strip_size_kb": 64, 00:10:46.226 "state": "configuring", 00:10:46.226 "raid_level": "concat", 00:10:46.226 "superblock": false, 00:10:46.226 "num_base_bdevs": 4, 00:10:46.226 "num_base_bdevs_discovered": 3, 00:10:46.226 "num_base_bdevs_operational": 4, 00:10:46.226 "base_bdevs_list": [ 00:10:46.226 { 00:10:46.226 "name": "BaseBdev1", 00:10:46.226 "uuid": "3b5a4420-0550-4db7-ade4-99b41930bab6", 00:10:46.226 "is_configured": true, 00:10:46.226 "data_offset": 0, 00:10:46.226 "data_size": 65536 00:10:46.226 }, 00:10:46.226 { 00:10:46.226 "name": null, 00:10:46.226 "uuid": "46bab700-3287-49c4-9fba-91cbc3093c5c", 00:10:46.226 "is_configured": false, 00:10:46.226 "data_offset": 0, 00:10:46.226 "data_size": 65536 00:10:46.226 }, 00:10:46.226 { 00:10:46.226 "name": "BaseBdev3", 00:10:46.226 "uuid": "d0748373-32f2-4bac-9321-c7a25ce6e97d", 00:10:46.226 
"is_configured": true, 00:10:46.226 "data_offset": 0, 00:10:46.226 "data_size": 65536 00:10:46.226 }, 00:10:46.226 { 00:10:46.226 "name": "BaseBdev4", 00:10:46.226 "uuid": "df3948ba-f70e-4895-8236-df462705c3e3", 00:10:46.226 "is_configured": true, 00:10:46.226 "data_offset": 0, 00:10:46.226 "data_size": 65536 00:10:46.226 } 00:10:46.226 ] 00:10:46.226 }' 00:10:46.226 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.226 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.486 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.486 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.486 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.486 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:46.486 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.486 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:46.486 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:46.486 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.486 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.486 [2024-12-06 09:48:11.738200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:46.744 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.744 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:46.744 09:48:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.744 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.744 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.744 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.744 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.744 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.744 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.744 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.744 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.744 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.744 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.745 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.745 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.745 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.745 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.745 "name": "Existed_Raid", 00:10:46.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.745 "strip_size_kb": 64, 00:10:46.745 "state": "configuring", 00:10:46.745 "raid_level": "concat", 00:10:46.745 "superblock": false, 00:10:46.745 "num_base_bdevs": 4, 00:10:46.745 "num_base_bdevs_discovered": 2, 00:10:46.745 "num_base_bdevs_operational": 4, 
00:10:46.745 "base_bdevs_list": [ 00:10:46.745 { 00:10:46.745 "name": null, 00:10:46.745 "uuid": "3b5a4420-0550-4db7-ade4-99b41930bab6", 00:10:46.745 "is_configured": false, 00:10:46.745 "data_offset": 0, 00:10:46.745 "data_size": 65536 00:10:46.745 }, 00:10:46.745 { 00:10:46.745 "name": null, 00:10:46.745 "uuid": "46bab700-3287-49c4-9fba-91cbc3093c5c", 00:10:46.745 "is_configured": false, 00:10:46.745 "data_offset": 0, 00:10:46.745 "data_size": 65536 00:10:46.745 }, 00:10:46.745 { 00:10:46.745 "name": "BaseBdev3", 00:10:46.745 "uuid": "d0748373-32f2-4bac-9321-c7a25ce6e97d", 00:10:46.745 "is_configured": true, 00:10:46.745 "data_offset": 0, 00:10:46.745 "data_size": 65536 00:10:46.745 }, 00:10:46.745 { 00:10:46.745 "name": "BaseBdev4", 00:10:46.745 "uuid": "df3948ba-f70e-4895-8236-df462705c3e3", 00:10:46.745 "is_configured": true, 00:10:46.745 "data_offset": 0, 00:10:46.745 "data_size": 65536 00:10:46.745 } 00:10:46.745 ] 00:10:46.745 }' 00:10:46.745 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.745 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:47.314 09:48:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.314 [2024-12-06 09:48:12.352591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.314 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.314 "name": "Existed_Raid", 00:10:47.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.314 "strip_size_kb": 64, 00:10:47.314 "state": "configuring", 00:10:47.314 "raid_level": "concat", 00:10:47.314 "superblock": false, 00:10:47.314 "num_base_bdevs": 4, 00:10:47.314 "num_base_bdevs_discovered": 3, 00:10:47.314 "num_base_bdevs_operational": 4, 00:10:47.314 "base_bdevs_list": [ 00:10:47.314 { 00:10:47.314 "name": null, 00:10:47.314 "uuid": "3b5a4420-0550-4db7-ade4-99b41930bab6", 00:10:47.314 "is_configured": false, 00:10:47.314 "data_offset": 0, 00:10:47.314 "data_size": 65536 00:10:47.314 }, 00:10:47.314 { 00:10:47.314 "name": "BaseBdev2", 00:10:47.315 "uuid": "46bab700-3287-49c4-9fba-91cbc3093c5c", 00:10:47.315 "is_configured": true, 00:10:47.315 "data_offset": 0, 00:10:47.315 "data_size": 65536 00:10:47.315 }, 00:10:47.315 { 00:10:47.315 "name": "BaseBdev3", 00:10:47.315 "uuid": "d0748373-32f2-4bac-9321-c7a25ce6e97d", 00:10:47.315 "is_configured": true, 00:10:47.315 "data_offset": 0, 00:10:47.315 "data_size": 65536 00:10:47.315 }, 00:10:47.315 { 00:10:47.315 "name": "BaseBdev4", 00:10:47.315 "uuid": "df3948ba-f70e-4895-8236-df462705c3e3", 00:10:47.315 "is_configured": true, 00:10:47.315 "data_offset": 0, 00:10:47.315 "data_size": 65536 00:10:47.315 } 00:10:47.315 ] 00:10:47.315 }' 00:10:47.315 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.315 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.574 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.574 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:47.574 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.574 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:47.574 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.833 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:47.833 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:47.833 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.833 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.833 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.833 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.833 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3b5a4420-0550-4db7-ade4-99b41930bab6 00:10:47.833 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.833 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.833 [2024-12-06 09:48:12.946705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:47.833 [2024-12-06 09:48:12.946833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:47.834 [2024-12-06 09:48:12.946859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:47.834 [2024-12-06 09:48:12.947177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:47.834 [2024-12-06 09:48:12.947403] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:47.834 [2024-12-06 09:48:12.947451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:47.834 NewBaseBdev 00:10:47.834 [2024-12-06 09:48:12.947768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.834 [ 00:10:47.834 { 
00:10:47.834 "name": "NewBaseBdev", 00:10:47.834 "aliases": [ 00:10:47.834 "3b5a4420-0550-4db7-ade4-99b41930bab6" 00:10:47.834 ], 00:10:47.834 "product_name": "Malloc disk", 00:10:47.834 "block_size": 512, 00:10:47.834 "num_blocks": 65536, 00:10:47.834 "uuid": "3b5a4420-0550-4db7-ade4-99b41930bab6", 00:10:47.834 "assigned_rate_limits": { 00:10:47.834 "rw_ios_per_sec": 0, 00:10:47.834 "rw_mbytes_per_sec": 0, 00:10:47.834 "r_mbytes_per_sec": 0, 00:10:47.834 "w_mbytes_per_sec": 0 00:10:47.834 }, 00:10:47.834 "claimed": true, 00:10:47.834 "claim_type": "exclusive_write", 00:10:47.834 "zoned": false, 00:10:47.834 "supported_io_types": { 00:10:47.834 "read": true, 00:10:47.834 "write": true, 00:10:47.834 "unmap": true, 00:10:47.834 "flush": true, 00:10:47.834 "reset": true, 00:10:47.834 "nvme_admin": false, 00:10:47.834 "nvme_io": false, 00:10:47.834 "nvme_io_md": false, 00:10:47.834 "write_zeroes": true, 00:10:47.834 "zcopy": true, 00:10:47.834 "get_zone_info": false, 00:10:47.834 "zone_management": false, 00:10:47.834 "zone_append": false, 00:10:47.834 "compare": false, 00:10:47.834 "compare_and_write": false, 00:10:47.834 "abort": true, 00:10:47.834 "seek_hole": false, 00:10:47.834 "seek_data": false, 00:10:47.834 "copy": true, 00:10:47.834 "nvme_iov_md": false 00:10:47.834 }, 00:10:47.834 "memory_domains": [ 00:10:47.834 { 00:10:47.834 "dma_device_id": "system", 00:10:47.834 "dma_device_type": 1 00:10:47.834 }, 00:10:47.834 { 00:10:47.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.834 "dma_device_type": 2 00:10:47.834 } 00:10:47.834 ], 00:10:47.834 "driver_specific": {} 00:10:47.834 } 00:10:47.834 ] 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:47.834 
09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.834 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.834 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.834 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.834 "name": "Existed_Raid", 00:10:47.834 "uuid": "4e6927ee-427f-4e34-aa35-6928ca8a21cd", 00:10:47.834 "strip_size_kb": 64, 00:10:47.834 "state": "online", 00:10:47.834 "raid_level": "concat", 00:10:47.834 "superblock": false, 00:10:47.834 "num_base_bdevs": 4, 00:10:47.834 "num_base_bdevs_discovered": 4, 00:10:47.834 
"num_base_bdevs_operational": 4, 00:10:47.834 "base_bdevs_list": [ 00:10:47.834 { 00:10:47.834 "name": "NewBaseBdev", 00:10:47.834 "uuid": "3b5a4420-0550-4db7-ade4-99b41930bab6", 00:10:47.834 "is_configured": true, 00:10:47.834 "data_offset": 0, 00:10:47.834 "data_size": 65536 00:10:47.834 }, 00:10:47.834 { 00:10:47.834 "name": "BaseBdev2", 00:10:47.834 "uuid": "46bab700-3287-49c4-9fba-91cbc3093c5c", 00:10:47.834 "is_configured": true, 00:10:47.834 "data_offset": 0, 00:10:47.834 "data_size": 65536 00:10:47.834 }, 00:10:47.834 { 00:10:47.834 "name": "BaseBdev3", 00:10:47.834 "uuid": "d0748373-32f2-4bac-9321-c7a25ce6e97d", 00:10:47.834 "is_configured": true, 00:10:47.834 "data_offset": 0, 00:10:47.834 "data_size": 65536 00:10:47.834 }, 00:10:47.834 { 00:10:47.834 "name": "BaseBdev4", 00:10:47.834 "uuid": "df3948ba-f70e-4895-8236-df462705c3e3", 00:10:47.834 "is_configured": true, 00:10:47.834 "data_offset": 0, 00:10:47.834 "data_size": 65536 00:10:47.834 } 00:10:47.834 ] 00:10:47.834 }' 00:10:47.834 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.834 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.404 [2024-12-06 09:48:13.442290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:48.404 "name": "Existed_Raid", 00:10:48.404 "aliases": [ 00:10:48.404 "4e6927ee-427f-4e34-aa35-6928ca8a21cd" 00:10:48.404 ], 00:10:48.404 "product_name": "Raid Volume", 00:10:48.404 "block_size": 512, 00:10:48.404 "num_blocks": 262144, 00:10:48.404 "uuid": "4e6927ee-427f-4e34-aa35-6928ca8a21cd", 00:10:48.404 "assigned_rate_limits": { 00:10:48.404 "rw_ios_per_sec": 0, 00:10:48.404 "rw_mbytes_per_sec": 0, 00:10:48.404 "r_mbytes_per_sec": 0, 00:10:48.404 "w_mbytes_per_sec": 0 00:10:48.404 }, 00:10:48.404 "claimed": false, 00:10:48.404 "zoned": false, 00:10:48.404 "supported_io_types": { 00:10:48.404 "read": true, 00:10:48.404 "write": true, 00:10:48.404 "unmap": true, 00:10:48.404 "flush": true, 00:10:48.404 "reset": true, 00:10:48.404 "nvme_admin": false, 00:10:48.404 "nvme_io": false, 00:10:48.404 "nvme_io_md": false, 00:10:48.404 "write_zeroes": true, 00:10:48.404 "zcopy": false, 00:10:48.404 "get_zone_info": false, 00:10:48.404 "zone_management": false, 00:10:48.404 "zone_append": false, 00:10:48.404 "compare": false, 00:10:48.404 "compare_and_write": false, 00:10:48.404 "abort": false, 00:10:48.404 "seek_hole": false, 00:10:48.404 "seek_data": false, 00:10:48.404 "copy": false, 00:10:48.404 "nvme_iov_md": false 00:10:48.404 }, 00:10:48.404 "memory_domains": [ 00:10:48.404 { 00:10:48.404 "dma_device_id": "system", 
00:10:48.404 "dma_device_type": 1 00:10:48.404 }, 00:10:48.404 { 00:10:48.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.404 "dma_device_type": 2 00:10:48.404 }, 00:10:48.404 { 00:10:48.404 "dma_device_id": "system", 00:10:48.404 "dma_device_type": 1 00:10:48.404 }, 00:10:48.404 { 00:10:48.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.404 "dma_device_type": 2 00:10:48.404 }, 00:10:48.404 { 00:10:48.404 "dma_device_id": "system", 00:10:48.404 "dma_device_type": 1 00:10:48.404 }, 00:10:48.404 { 00:10:48.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.404 "dma_device_type": 2 00:10:48.404 }, 00:10:48.404 { 00:10:48.404 "dma_device_id": "system", 00:10:48.404 "dma_device_type": 1 00:10:48.404 }, 00:10:48.404 { 00:10:48.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.404 "dma_device_type": 2 00:10:48.404 } 00:10:48.404 ], 00:10:48.404 "driver_specific": { 00:10:48.404 "raid": { 00:10:48.404 "uuid": "4e6927ee-427f-4e34-aa35-6928ca8a21cd", 00:10:48.404 "strip_size_kb": 64, 00:10:48.404 "state": "online", 00:10:48.404 "raid_level": "concat", 00:10:48.404 "superblock": false, 00:10:48.404 "num_base_bdevs": 4, 00:10:48.404 "num_base_bdevs_discovered": 4, 00:10:48.404 "num_base_bdevs_operational": 4, 00:10:48.404 "base_bdevs_list": [ 00:10:48.404 { 00:10:48.404 "name": "NewBaseBdev", 00:10:48.404 "uuid": "3b5a4420-0550-4db7-ade4-99b41930bab6", 00:10:48.404 "is_configured": true, 00:10:48.404 "data_offset": 0, 00:10:48.404 "data_size": 65536 00:10:48.404 }, 00:10:48.404 { 00:10:48.404 "name": "BaseBdev2", 00:10:48.404 "uuid": "46bab700-3287-49c4-9fba-91cbc3093c5c", 00:10:48.404 "is_configured": true, 00:10:48.404 "data_offset": 0, 00:10:48.404 "data_size": 65536 00:10:48.404 }, 00:10:48.404 { 00:10:48.404 "name": "BaseBdev3", 00:10:48.404 "uuid": "d0748373-32f2-4bac-9321-c7a25ce6e97d", 00:10:48.404 "is_configured": true, 00:10:48.404 "data_offset": 0, 00:10:48.404 "data_size": 65536 00:10:48.404 }, 00:10:48.404 { 00:10:48.404 "name": "BaseBdev4", 
00:10:48.404 "uuid": "df3948ba-f70e-4895-8236-df462705c3e3", 00:10:48.404 "is_configured": true, 00:10:48.404 "data_offset": 0, 00:10:48.404 "data_size": 65536 00:10:48.404 } 00:10:48.404 ] 00:10:48.404 } 00:10:48.404 } 00:10:48.404 }' 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.404 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:48.405 BaseBdev2 00:10:48.405 BaseBdev3 00:10:48.405 BaseBdev4' 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.405 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:48.665 09:48:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.665 [2024-12-06 09:48:13.773352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.665 [2024-12-06 09:48:13.773385] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.665 [2024-12-06 09:48:13.773477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.665 [2024-12-06 09:48:13.773547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.665 [2024-12-06 09:48:13.773556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71213 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71213 ']' 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71213 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71213 00:10:48.665 killing process with pid 71213 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71213' 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71213 00:10:48.665 [2024-12-06 09:48:13.814690] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:48.665 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71213 00:10:49.234 [2024-12-06 09:48:14.207285] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:50.173 00:10:50.173 real 0m11.691s 00:10:50.173 user 0m18.650s 00:10:50.173 sys 0m2.043s 00:10:50.173 ************************************ 00:10:50.173 END TEST raid_state_function_test 00:10:50.173 ************************************ 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.173 09:48:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:10:50.173 09:48:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:50.173 09:48:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.173 09:48:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.173 ************************************ 00:10:50.173 START TEST raid_state_function_test_sb 00:10:50.173 ************************************ 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:50.173 09:48:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71884 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71884' 00:10:50.173 Process raid pid: 71884 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71884 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71884 ']' 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.173 09:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.432 [2024-12-06 09:48:15.500833] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:10:50.432 [2024-12-06 09:48:15.500949] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.432 [2024-12-06 09:48:15.675128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.691 [2024-12-06 09:48:15.790346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.949 [2024-12-06 09:48:15.997469] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.949 [2024-12-06 09:48:15.997511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.208 [2024-12-06 09:48:16.355023] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.208 [2024-12-06 09:48:16.355130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.208 [2024-12-06 09:48:16.355183] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:51.208 [2024-12-06 09:48:16.355225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:51.208 [2024-12-06 09:48:16.355254] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:51.208 [2024-12-06 09:48:16.355278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:51.208 [2024-12-06 09:48:16.355344] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:51.208 [2024-12-06 09:48:16.355368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.208 09:48:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.208 "name": "Existed_Raid", 00:10:51.208 "uuid": "6dad5cd1-7577-499c-a344-c111a292d5d0", 00:10:51.208 "strip_size_kb": 64, 00:10:51.208 "state": "configuring", 00:10:51.208 "raid_level": "concat", 00:10:51.208 "superblock": true, 00:10:51.208 "num_base_bdevs": 4, 00:10:51.208 "num_base_bdevs_discovered": 0, 00:10:51.208 "num_base_bdevs_operational": 4, 00:10:51.208 "base_bdevs_list": [ 00:10:51.208 { 00:10:51.208 "name": "BaseBdev1", 00:10:51.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.208 "is_configured": false, 00:10:51.208 "data_offset": 0, 00:10:51.208 "data_size": 0 00:10:51.208 }, 00:10:51.208 { 00:10:51.208 "name": "BaseBdev2", 00:10:51.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.208 "is_configured": false, 00:10:51.208 "data_offset": 0, 00:10:51.208 "data_size": 0 00:10:51.208 }, 00:10:51.208 { 00:10:51.208 "name": "BaseBdev3", 00:10:51.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.208 "is_configured": false, 00:10:51.208 "data_offset": 0, 00:10:51.208 "data_size": 0 00:10:51.208 }, 00:10:51.208 { 00:10:51.208 "name": "BaseBdev4", 00:10:51.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.208 "is_configured": false, 00:10:51.208 "data_offset": 0, 00:10:51.208 "data_size": 0 00:10:51.208 } 00:10:51.208 ] 00:10:51.208 }' 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.208 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.778 09:48:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.778 [2024-12-06 09:48:16.810179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:51.778 [2024-12-06 09:48:16.810259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.778 [2024-12-06 09:48:16.822174] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:51.778 [2024-12-06 09:48:16.822271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:51.778 [2024-12-06 09:48:16.822299] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:51.778 [2024-12-06 09:48:16.822323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:51.778 [2024-12-06 09:48:16.822342] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:51.778 [2024-12-06 09:48:16.822363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:51.778 [2024-12-06 09:48:16.822381] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:51.778 [2024-12-06 09:48:16.822402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.778 [2024-12-06 09:48:16.870633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.778 BaseBdev1 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.778 [ 00:10:51.778 { 00:10:51.778 "name": "BaseBdev1", 00:10:51.778 "aliases": [ 00:10:51.778 "afb52f4b-07a6-4adf-9392-3337efe7e2ee" 00:10:51.778 ], 00:10:51.778 "product_name": "Malloc disk", 00:10:51.778 "block_size": 512, 00:10:51.778 "num_blocks": 65536, 00:10:51.778 "uuid": "afb52f4b-07a6-4adf-9392-3337efe7e2ee", 00:10:51.778 "assigned_rate_limits": { 00:10:51.778 "rw_ios_per_sec": 0, 00:10:51.778 "rw_mbytes_per_sec": 0, 00:10:51.778 "r_mbytes_per_sec": 0, 00:10:51.778 "w_mbytes_per_sec": 0 00:10:51.778 }, 00:10:51.778 "claimed": true, 00:10:51.778 "claim_type": "exclusive_write", 00:10:51.778 "zoned": false, 00:10:51.778 "supported_io_types": { 00:10:51.778 "read": true, 00:10:51.778 "write": true, 00:10:51.778 "unmap": true, 00:10:51.778 "flush": true, 00:10:51.778 "reset": true, 00:10:51.778 "nvme_admin": false, 00:10:51.778 "nvme_io": false, 00:10:51.778 "nvme_io_md": false, 00:10:51.778 "write_zeroes": true, 00:10:51.778 "zcopy": true, 00:10:51.778 "get_zone_info": false, 00:10:51.778 "zone_management": false, 00:10:51.778 "zone_append": false, 00:10:51.778 "compare": false, 00:10:51.778 "compare_and_write": false, 00:10:51.778 "abort": true, 00:10:51.778 "seek_hole": false, 00:10:51.778 "seek_data": false, 00:10:51.778 "copy": true, 00:10:51.778 "nvme_iov_md": false 00:10:51.778 }, 00:10:51.778 "memory_domains": [ 00:10:51.778 { 00:10:51.778 "dma_device_id": "system", 00:10:51.778 "dma_device_type": 1 00:10:51.778 }, 00:10:51.778 { 00:10:51.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.778 "dma_device_type": 2 00:10:51.778 } 
00:10:51.778 ], 00:10:51.778 "driver_specific": {} 00:10:51.778 } 00:10:51.778 ] 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.778 09:48:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.778 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.778 "name": "Existed_Raid", 00:10:51.778 "uuid": "5e35494a-bacd-4219-bb33-6ce87536dd12", 00:10:51.778 "strip_size_kb": 64, 00:10:51.778 "state": "configuring", 00:10:51.778 "raid_level": "concat", 00:10:51.778 "superblock": true, 00:10:51.778 "num_base_bdevs": 4, 00:10:51.778 "num_base_bdevs_discovered": 1, 00:10:51.778 "num_base_bdevs_operational": 4, 00:10:51.778 "base_bdevs_list": [ 00:10:51.778 { 00:10:51.778 "name": "BaseBdev1", 00:10:51.778 "uuid": "afb52f4b-07a6-4adf-9392-3337efe7e2ee", 00:10:51.778 "is_configured": true, 00:10:51.778 "data_offset": 2048, 00:10:51.778 "data_size": 63488 00:10:51.778 }, 00:10:51.778 { 00:10:51.778 "name": "BaseBdev2", 00:10:51.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.778 "is_configured": false, 00:10:51.778 "data_offset": 0, 00:10:51.778 "data_size": 0 00:10:51.778 }, 00:10:51.778 { 00:10:51.778 "name": "BaseBdev3", 00:10:51.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.778 "is_configured": false, 00:10:51.778 "data_offset": 0, 00:10:51.778 "data_size": 0 00:10:51.778 }, 00:10:51.778 { 00:10:51.778 "name": "BaseBdev4", 00:10:51.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.778 "is_configured": false, 00:10:51.778 "data_offset": 0, 00:10:51.778 "data_size": 0 00:10:51.779 } 00:10:51.779 ] 00:10:51.779 }' 00:10:51.779 09:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.779 09:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.348 09:48:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.348 [2024-12-06 09:48:17.401765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:52.348 [2024-12-06 09:48:17.401871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.348 [2024-12-06 09:48:17.409835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.348 [2024-12-06 09:48:17.411855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:52.348 [2024-12-06 09:48:17.411934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:52.348 [2024-12-06 09:48:17.411969] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:52.348 [2024-12-06 09:48:17.412015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:52.348 [2024-12-06 09:48:17.412047] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:52.348 [2024-12-06 09:48:17.412073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.348 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.349 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.349 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:52.349 "name": "Existed_Raid", 00:10:52.349 "uuid": "2beb83eb-56e2-4f04-be27-f3746a8d45dc", 00:10:52.349 "strip_size_kb": 64, 00:10:52.349 "state": "configuring", 00:10:52.349 "raid_level": "concat", 00:10:52.349 "superblock": true, 00:10:52.349 "num_base_bdevs": 4, 00:10:52.349 "num_base_bdevs_discovered": 1, 00:10:52.349 "num_base_bdevs_operational": 4, 00:10:52.349 "base_bdevs_list": [ 00:10:52.349 { 00:10:52.349 "name": "BaseBdev1", 00:10:52.349 "uuid": "afb52f4b-07a6-4adf-9392-3337efe7e2ee", 00:10:52.349 "is_configured": true, 00:10:52.349 "data_offset": 2048, 00:10:52.349 "data_size": 63488 00:10:52.349 }, 00:10:52.349 { 00:10:52.349 "name": "BaseBdev2", 00:10:52.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.349 "is_configured": false, 00:10:52.349 "data_offset": 0, 00:10:52.349 "data_size": 0 00:10:52.349 }, 00:10:52.349 { 00:10:52.349 "name": "BaseBdev3", 00:10:52.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.349 "is_configured": false, 00:10:52.349 "data_offset": 0, 00:10:52.349 "data_size": 0 00:10:52.349 }, 00:10:52.349 { 00:10:52.349 "name": "BaseBdev4", 00:10:52.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.349 "is_configured": false, 00:10:52.349 "data_offset": 0, 00:10:52.349 "data_size": 0 00:10:52.349 } 00:10:52.349 ] 00:10:52.349 }' 00:10:52.349 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.349 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.608 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:52.609 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.609 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.868 [2024-12-06 09:48:17.894575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:52.868 BaseBdev2 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.868 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.868 [ 00:10:52.868 { 00:10:52.868 "name": "BaseBdev2", 00:10:52.869 "aliases": [ 00:10:52.869 "62508f06-eec9-402a-8925-569b87893ba7" 00:10:52.869 ], 00:10:52.869 "product_name": "Malloc disk", 00:10:52.869 "block_size": 512, 00:10:52.869 "num_blocks": 65536, 00:10:52.869 "uuid": "62508f06-eec9-402a-8925-569b87893ba7", 
00:10:52.869 "assigned_rate_limits": { 00:10:52.869 "rw_ios_per_sec": 0, 00:10:52.869 "rw_mbytes_per_sec": 0, 00:10:52.869 "r_mbytes_per_sec": 0, 00:10:52.869 "w_mbytes_per_sec": 0 00:10:52.869 }, 00:10:52.869 "claimed": true, 00:10:52.869 "claim_type": "exclusive_write", 00:10:52.869 "zoned": false, 00:10:52.869 "supported_io_types": { 00:10:52.869 "read": true, 00:10:52.869 "write": true, 00:10:52.869 "unmap": true, 00:10:52.869 "flush": true, 00:10:52.869 "reset": true, 00:10:52.869 "nvme_admin": false, 00:10:52.869 "nvme_io": false, 00:10:52.869 "nvme_io_md": false, 00:10:52.869 "write_zeroes": true, 00:10:52.869 "zcopy": true, 00:10:52.869 "get_zone_info": false, 00:10:52.869 "zone_management": false, 00:10:52.869 "zone_append": false, 00:10:52.869 "compare": false, 00:10:52.869 "compare_and_write": false, 00:10:52.869 "abort": true, 00:10:52.869 "seek_hole": false, 00:10:52.869 "seek_data": false, 00:10:52.869 "copy": true, 00:10:52.869 "nvme_iov_md": false 00:10:52.869 }, 00:10:52.869 "memory_domains": [ 00:10:52.869 { 00:10:52.869 "dma_device_id": "system", 00:10:52.869 "dma_device_type": 1 00:10:52.869 }, 00:10:52.869 { 00:10:52.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.869 "dma_device_type": 2 00:10:52.869 } 00:10:52.869 ], 00:10:52.869 "driver_specific": {} 00:10:52.869 } 00:10:52.869 ] 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.869 "name": "Existed_Raid", 00:10:52.869 "uuid": "2beb83eb-56e2-4f04-be27-f3746a8d45dc", 00:10:52.869 "strip_size_kb": 64, 00:10:52.869 "state": "configuring", 00:10:52.869 "raid_level": "concat", 00:10:52.869 "superblock": true, 00:10:52.869 "num_base_bdevs": 4, 00:10:52.869 "num_base_bdevs_discovered": 2, 00:10:52.869 
"num_base_bdevs_operational": 4, 00:10:52.869 "base_bdevs_list": [ 00:10:52.869 { 00:10:52.869 "name": "BaseBdev1", 00:10:52.869 "uuid": "afb52f4b-07a6-4adf-9392-3337efe7e2ee", 00:10:52.869 "is_configured": true, 00:10:52.869 "data_offset": 2048, 00:10:52.869 "data_size": 63488 00:10:52.869 }, 00:10:52.869 { 00:10:52.869 "name": "BaseBdev2", 00:10:52.869 "uuid": "62508f06-eec9-402a-8925-569b87893ba7", 00:10:52.869 "is_configured": true, 00:10:52.869 "data_offset": 2048, 00:10:52.869 "data_size": 63488 00:10:52.869 }, 00:10:52.869 { 00:10:52.869 "name": "BaseBdev3", 00:10:52.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.869 "is_configured": false, 00:10:52.869 "data_offset": 0, 00:10:52.869 "data_size": 0 00:10:52.869 }, 00:10:52.869 { 00:10:52.869 "name": "BaseBdev4", 00:10:52.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.869 "is_configured": false, 00:10:52.869 "data_offset": 0, 00:10:52.869 "data_size": 0 00:10:52.869 } 00:10:52.869 ] 00:10:52.869 }' 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.869 09:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.128 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:53.128 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.128 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.387 [2024-12-06 09:48:18.429520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.387 BaseBdev3 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.387 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.387 [ 00:10:53.387 { 00:10:53.387 "name": "BaseBdev3", 00:10:53.387 "aliases": [ 00:10:53.387 "d57a1430-d6e4-477a-9cad-2c596b1bc73b" 00:10:53.387 ], 00:10:53.387 "product_name": "Malloc disk", 00:10:53.388 "block_size": 512, 00:10:53.388 "num_blocks": 65536, 00:10:53.388 "uuid": "d57a1430-d6e4-477a-9cad-2c596b1bc73b", 00:10:53.388 "assigned_rate_limits": { 00:10:53.388 "rw_ios_per_sec": 0, 00:10:53.388 "rw_mbytes_per_sec": 0, 00:10:53.388 "r_mbytes_per_sec": 0, 00:10:53.388 "w_mbytes_per_sec": 0 00:10:53.388 }, 00:10:53.388 "claimed": true, 00:10:53.388 "claim_type": "exclusive_write", 00:10:53.388 "zoned": false, 00:10:53.388 "supported_io_types": { 
00:10:53.388 "read": true, 00:10:53.388 "write": true, 00:10:53.388 "unmap": true, 00:10:53.388 "flush": true, 00:10:53.388 "reset": true, 00:10:53.388 "nvme_admin": false, 00:10:53.388 "nvme_io": false, 00:10:53.388 "nvme_io_md": false, 00:10:53.388 "write_zeroes": true, 00:10:53.388 "zcopy": true, 00:10:53.388 "get_zone_info": false, 00:10:53.388 "zone_management": false, 00:10:53.388 "zone_append": false, 00:10:53.388 "compare": false, 00:10:53.388 "compare_and_write": false, 00:10:53.388 "abort": true, 00:10:53.388 "seek_hole": false, 00:10:53.388 "seek_data": false, 00:10:53.388 "copy": true, 00:10:53.388 "nvme_iov_md": false 00:10:53.388 }, 00:10:53.388 "memory_domains": [ 00:10:53.388 { 00:10:53.388 "dma_device_id": "system", 00:10:53.388 "dma_device_type": 1 00:10:53.388 }, 00:10:53.388 { 00:10:53.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.388 "dma_device_type": 2 00:10:53.388 } 00:10:53.388 ], 00:10:53.388 "driver_specific": {} 00:10:53.388 } 00:10:53.388 ] 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.388 "name": "Existed_Raid", 00:10:53.388 "uuid": "2beb83eb-56e2-4f04-be27-f3746a8d45dc", 00:10:53.388 "strip_size_kb": 64, 00:10:53.388 "state": "configuring", 00:10:53.388 "raid_level": "concat", 00:10:53.388 "superblock": true, 00:10:53.388 "num_base_bdevs": 4, 00:10:53.388 "num_base_bdevs_discovered": 3, 00:10:53.388 "num_base_bdevs_operational": 4, 00:10:53.388 "base_bdevs_list": [ 00:10:53.388 { 00:10:53.388 "name": "BaseBdev1", 00:10:53.388 "uuid": "afb52f4b-07a6-4adf-9392-3337efe7e2ee", 00:10:53.388 "is_configured": true, 00:10:53.388 "data_offset": 2048, 00:10:53.388 "data_size": 63488 00:10:53.388 }, 00:10:53.388 { 00:10:53.388 "name": "BaseBdev2", 00:10:53.388 
"uuid": "62508f06-eec9-402a-8925-569b87893ba7", 00:10:53.388 "is_configured": true, 00:10:53.388 "data_offset": 2048, 00:10:53.388 "data_size": 63488 00:10:53.388 }, 00:10:53.388 { 00:10:53.388 "name": "BaseBdev3", 00:10:53.388 "uuid": "d57a1430-d6e4-477a-9cad-2c596b1bc73b", 00:10:53.388 "is_configured": true, 00:10:53.388 "data_offset": 2048, 00:10:53.388 "data_size": 63488 00:10:53.388 }, 00:10:53.388 { 00:10:53.388 "name": "BaseBdev4", 00:10:53.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.388 "is_configured": false, 00:10:53.388 "data_offset": 0, 00:10:53.388 "data_size": 0 00:10:53.388 } 00:10:53.388 ] 00:10:53.388 }' 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.388 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.958 09:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:53.958 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.958 09:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.958 [2024-12-06 09:48:19.006542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:53.958 [2024-12-06 09:48:19.006915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:53.958 [2024-12-06 09:48:19.006970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:53.958 [2024-12-06 09:48:19.007298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:53.958 [2024-12-06 09:48:19.007491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:53.958 BaseBdev4 00:10:53.958 [2024-12-06 09:48:19.007541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:53.958 [2024-12-06 09:48:19.007731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.958 [ 00:10:53.958 { 00:10:53.958 "name": "BaseBdev4", 00:10:53.958 "aliases": [ 00:10:53.958 "e27e1362-e1b2-481d-9768-cbe6f2798849" 00:10:53.958 ], 00:10:53.958 "product_name": "Malloc disk", 00:10:53.958 "block_size": 512, 00:10:53.958 
"num_blocks": 65536, 00:10:53.958 "uuid": "e27e1362-e1b2-481d-9768-cbe6f2798849", 00:10:53.958 "assigned_rate_limits": { 00:10:53.958 "rw_ios_per_sec": 0, 00:10:53.958 "rw_mbytes_per_sec": 0, 00:10:53.958 "r_mbytes_per_sec": 0, 00:10:53.958 "w_mbytes_per_sec": 0 00:10:53.958 }, 00:10:53.958 "claimed": true, 00:10:53.958 "claim_type": "exclusive_write", 00:10:53.958 "zoned": false, 00:10:53.958 "supported_io_types": { 00:10:53.958 "read": true, 00:10:53.958 "write": true, 00:10:53.958 "unmap": true, 00:10:53.958 "flush": true, 00:10:53.958 "reset": true, 00:10:53.958 "nvme_admin": false, 00:10:53.958 "nvme_io": false, 00:10:53.958 "nvme_io_md": false, 00:10:53.958 "write_zeroes": true, 00:10:53.958 "zcopy": true, 00:10:53.958 "get_zone_info": false, 00:10:53.958 "zone_management": false, 00:10:53.958 "zone_append": false, 00:10:53.958 "compare": false, 00:10:53.958 "compare_and_write": false, 00:10:53.958 "abort": true, 00:10:53.958 "seek_hole": false, 00:10:53.958 "seek_data": false, 00:10:53.958 "copy": true, 00:10:53.958 "nvme_iov_md": false 00:10:53.958 }, 00:10:53.958 "memory_domains": [ 00:10:53.958 { 00:10:53.958 "dma_device_id": "system", 00:10:53.958 "dma_device_type": 1 00:10:53.958 }, 00:10:53.958 { 00:10:53.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.958 "dma_device_type": 2 00:10:53.958 } 00:10:53.958 ], 00:10:53.958 "driver_specific": {} 00:10:53.958 } 00:10:53.958 ] 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.958 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.959 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.959 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.959 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.959 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.959 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.959 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.959 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.959 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.959 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.959 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.959 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.959 "name": "Existed_Raid", 00:10:53.959 "uuid": "2beb83eb-56e2-4f04-be27-f3746a8d45dc", 00:10:53.959 "strip_size_kb": 64, 00:10:53.959 "state": "online", 00:10:53.959 "raid_level": "concat", 00:10:53.959 "superblock": true, 00:10:53.959 "num_base_bdevs": 4, 
00:10:53.959 "num_base_bdevs_discovered": 4, 00:10:53.959 "num_base_bdevs_operational": 4, 00:10:53.959 "base_bdevs_list": [ 00:10:53.959 { 00:10:53.959 "name": "BaseBdev1", 00:10:53.959 "uuid": "afb52f4b-07a6-4adf-9392-3337efe7e2ee", 00:10:53.959 "is_configured": true, 00:10:53.959 "data_offset": 2048, 00:10:53.959 "data_size": 63488 00:10:53.959 }, 00:10:53.959 { 00:10:53.959 "name": "BaseBdev2", 00:10:53.959 "uuid": "62508f06-eec9-402a-8925-569b87893ba7", 00:10:53.959 "is_configured": true, 00:10:53.959 "data_offset": 2048, 00:10:53.959 "data_size": 63488 00:10:53.959 }, 00:10:53.959 { 00:10:53.959 "name": "BaseBdev3", 00:10:53.959 "uuid": "d57a1430-d6e4-477a-9cad-2c596b1bc73b", 00:10:53.959 "is_configured": true, 00:10:53.959 "data_offset": 2048, 00:10:53.959 "data_size": 63488 00:10:53.959 }, 00:10:53.959 { 00:10:53.959 "name": "BaseBdev4", 00:10:53.959 "uuid": "e27e1362-e1b2-481d-9768-cbe6f2798849", 00:10:53.959 "is_configured": true, 00:10:53.959 "data_offset": 2048, 00:10:53.959 "data_size": 63488 00:10:53.959 } 00:10:53.959 ] 00:10:53.959 }' 00:10:53.959 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.959 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.220 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:54.220 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:54.220 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:54.220 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:54.220 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.220 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.220 
09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.220 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:54.220 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.220 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.220 [2024-12-06 09:48:19.482138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.481 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.481 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.481 "name": "Existed_Raid", 00:10:54.481 "aliases": [ 00:10:54.481 "2beb83eb-56e2-4f04-be27-f3746a8d45dc" 00:10:54.481 ], 00:10:54.481 "product_name": "Raid Volume", 00:10:54.481 "block_size": 512, 00:10:54.481 "num_blocks": 253952, 00:10:54.481 "uuid": "2beb83eb-56e2-4f04-be27-f3746a8d45dc", 00:10:54.481 "assigned_rate_limits": { 00:10:54.481 "rw_ios_per_sec": 0, 00:10:54.481 "rw_mbytes_per_sec": 0, 00:10:54.481 "r_mbytes_per_sec": 0, 00:10:54.481 "w_mbytes_per_sec": 0 00:10:54.481 }, 00:10:54.481 "claimed": false, 00:10:54.481 "zoned": false, 00:10:54.481 "supported_io_types": { 00:10:54.481 "read": true, 00:10:54.481 "write": true, 00:10:54.481 "unmap": true, 00:10:54.481 "flush": true, 00:10:54.481 "reset": true, 00:10:54.481 "nvme_admin": false, 00:10:54.481 "nvme_io": false, 00:10:54.481 "nvme_io_md": false, 00:10:54.481 "write_zeroes": true, 00:10:54.481 "zcopy": false, 00:10:54.481 "get_zone_info": false, 00:10:54.481 "zone_management": false, 00:10:54.481 "zone_append": false, 00:10:54.481 "compare": false, 00:10:54.481 "compare_and_write": false, 00:10:54.481 "abort": false, 00:10:54.481 "seek_hole": false, 00:10:54.481 "seek_data": false, 00:10:54.481 "copy": false, 00:10:54.481 
"nvme_iov_md": false 00:10:54.481 }, 00:10:54.481 "memory_domains": [ 00:10:54.481 { 00:10:54.481 "dma_device_id": "system", 00:10:54.481 "dma_device_type": 1 00:10:54.481 }, 00:10:54.481 { 00:10:54.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.481 "dma_device_type": 2 00:10:54.481 }, 00:10:54.481 { 00:10:54.481 "dma_device_id": "system", 00:10:54.481 "dma_device_type": 1 00:10:54.481 }, 00:10:54.481 { 00:10:54.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.481 "dma_device_type": 2 00:10:54.481 }, 00:10:54.481 { 00:10:54.481 "dma_device_id": "system", 00:10:54.481 "dma_device_type": 1 00:10:54.481 }, 00:10:54.481 { 00:10:54.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.481 "dma_device_type": 2 00:10:54.481 }, 00:10:54.481 { 00:10:54.481 "dma_device_id": "system", 00:10:54.481 "dma_device_type": 1 00:10:54.481 }, 00:10:54.481 { 00:10:54.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.481 "dma_device_type": 2 00:10:54.481 } 00:10:54.481 ], 00:10:54.481 "driver_specific": { 00:10:54.481 "raid": { 00:10:54.481 "uuid": "2beb83eb-56e2-4f04-be27-f3746a8d45dc", 00:10:54.481 "strip_size_kb": 64, 00:10:54.481 "state": "online", 00:10:54.481 "raid_level": "concat", 00:10:54.481 "superblock": true, 00:10:54.481 "num_base_bdevs": 4, 00:10:54.481 "num_base_bdevs_discovered": 4, 00:10:54.481 "num_base_bdevs_operational": 4, 00:10:54.481 "base_bdevs_list": [ 00:10:54.481 { 00:10:54.481 "name": "BaseBdev1", 00:10:54.481 "uuid": "afb52f4b-07a6-4adf-9392-3337efe7e2ee", 00:10:54.481 "is_configured": true, 00:10:54.481 "data_offset": 2048, 00:10:54.481 "data_size": 63488 00:10:54.481 }, 00:10:54.481 { 00:10:54.481 "name": "BaseBdev2", 00:10:54.481 "uuid": "62508f06-eec9-402a-8925-569b87893ba7", 00:10:54.481 "is_configured": true, 00:10:54.481 "data_offset": 2048, 00:10:54.481 "data_size": 63488 00:10:54.481 }, 00:10:54.481 { 00:10:54.481 "name": "BaseBdev3", 00:10:54.481 "uuid": "d57a1430-d6e4-477a-9cad-2c596b1bc73b", 00:10:54.481 "is_configured": true, 
00:10:54.481 "data_offset": 2048, 00:10:54.481 "data_size": 63488 00:10:54.481 }, 00:10:54.481 { 00:10:54.481 "name": "BaseBdev4", 00:10:54.481 "uuid": "e27e1362-e1b2-481d-9768-cbe6f2798849", 00:10:54.481 "is_configured": true, 00:10:54.481 "data_offset": 2048, 00:10:54.481 "data_size": 63488 00:10:54.481 } 00:10:54.481 ] 00:10:54.481 } 00:10:54.481 } 00:10:54.481 }' 00:10:54.481 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.481 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:54.481 BaseBdev2 00:10:54.481 BaseBdev3 00:10:54.481 BaseBdev4' 00:10:54.481 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.481 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.481 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.481 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:54.481 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.481 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.481 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.481 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.482 09:48:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.482 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.742 [2024-12-06 09:48:19.813292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:54.742 [2024-12-06 09:48:19.813365] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.742 [2024-12-06 09:48:19.813467] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:54.742 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.742 "name": "Existed_Raid", 00:10:54.742 "uuid": "2beb83eb-56e2-4f04-be27-f3746a8d45dc", 00:10:54.742 "strip_size_kb": 64, 00:10:54.742 "state": "offline", 00:10:54.742 "raid_level": "concat", 00:10:54.742 "superblock": true, 00:10:54.742 "num_base_bdevs": 4, 00:10:54.742 "num_base_bdevs_discovered": 3, 00:10:54.742 "num_base_bdevs_operational": 3, 00:10:54.742 "base_bdevs_list": [ 00:10:54.742 { 00:10:54.742 "name": null, 00:10:54.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.742 "is_configured": false, 00:10:54.742 "data_offset": 0, 00:10:54.742 "data_size": 63488 00:10:54.742 }, 00:10:54.742 { 00:10:54.742 "name": "BaseBdev2", 00:10:54.742 "uuid": "62508f06-eec9-402a-8925-569b87893ba7", 00:10:54.742 "is_configured": true, 00:10:54.742 "data_offset": 2048, 00:10:54.743 "data_size": 63488 00:10:54.743 }, 00:10:54.743 { 00:10:54.743 "name": "BaseBdev3", 00:10:54.743 "uuid": "d57a1430-d6e4-477a-9cad-2c596b1bc73b", 00:10:54.743 "is_configured": true, 00:10:54.743 "data_offset": 2048, 00:10:54.743 "data_size": 63488 00:10:54.743 }, 00:10:54.743 { 00:10:54.743 "name": "BaseBdev4", 00:10:54.743 "uuid": "e27e1362-e1b2-481d-9768-cbe6f2798849", 00:10:54.743 "is_configured": true, 00:10:54.743 "data_offset": 2048, 00:10:54.743 "data_size": 63488 00:10:54.743 } 00:10:54.743 ] 00:10:54.743 }' 00:10:54.743 09:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.743 09:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.312 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:55.312 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.313 
09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.313 [2024-12-06 09:48:20.356788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.313 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.313 [2024-12-06 09:48:20.507137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:55.573 09:48:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.573 [2024-12-06 09:48:20.673330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:55.573 [2024-12-06 09:48:20.673440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.573 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.833 BaseBdev2 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.833 [ 00:10:55.833 { 00:10:55.833 "name": "BaseBdev2", 00:10:55.833 "aliases": [ 00:10:55.833 
"110446ff-263e-49a0-b1ed-6b8cd89e5f36" 00:10:55.833 ], 00:10:55.833 "product_name": "Malloc disk", 00:10:55.833 "block_size": 512, 00:10:55.833 "num_blocks": 65536, 00:10:55.833 "uuid": "110446ff-263e-49a0-b1ed-6b8cd89e5f36", 00:10:55.833 "assigned_rate_limits": { 00:10:55.833 "rw_ios_per_sec": 0, 00:10:55.833 "rw_mbytes_per_sec": 0, 00:10:55.833 "r_mbytes_per_sec": 0, 00:10:55.833 "w_mbytes_per_sec": 0 00:10:55.833 }, 00:10:55.833 "claimed": false, 00:10:55.833 "zoned": false, 00:10:55.833 "supported_io_types": { 00:10:55.833 "read": true, 00:10:55.833 "write": true, 00:10:55.833 "unmap": true, 00:10:55.833 "flush": true, 00:10:55.833 "reset": true, 00:10:55.833 "nvme_admin": false, 00:10:55.833 "nvme_io": false, 00:10:55.833 "nvme_io_md": false, 00:10:55.833 "write_zeroes": true, 00:10:55.833 "zcopy": true, 00:10:55.833 "get_zone_info": false, 00:10:55.833 "zone_management": false, 00:10:55.833 "zone_append": false, 00:10:55.833 "compare": false, 00:10:55.833 "compare_and_write": false, 00:10:55.833 "abort": true, 00:10:55.833 "seek_hole": false, 00:10:55.833 "seek_data": false, 00:10:55.833 "copy": true, 00:10:55.833 "nvme_iov_md": false 00:10:55.833 }, 00:10:55.833 "memory_domains": [ 00:10:55.833 { 00:10:55.833 "dma_device_id": "system", 00:10:55.833 "dma_device_type": 1 00:10:55.833 }, 00:10:55.833 { 00:10:55.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.833 "dma_device_type": 2 00:10:55.833 } 00:10:55.833 ], 00:10:55.833 "driver_specific": {} 00:10:55.833 } 00:10:55.833 ] 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:55.833 09:48:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.833 BaseBdev3 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.833 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.833 [ 00:10:55.833 { 
00:10:55.833 "name": "BaseBdev3", 00:10:55.833 "aliases": [ 00:10:55.833 "ef263986-de73-46db-9e11-2731d468a980" 00:10:55.833 ], 00:10:55.833 "product_name": "Malloc disk", 00:10:55.833 "block_size": 512, 00:10:55.833 "num_blocks": 65536, 00:10:55.833 "uuid": "ef263986-de73-46db-9e11-2731d468a980", 00:10:55.833 "assigned_rate_limits": { 00:10:55.833 "rw_ios_per_sec": 0, 00:10:55.833 "rw_mbytes_per_sec": 0, 00:10:55.833 "r_mbytes_per_sec": 0, 00:10:55.833 "w_mbytes_per_sec": 0 00:10:55.833 }, 00:10:55.833 "claimed": false, 00:10:55.833 "zoned": false, 00:10:55.833 "supported_io_types": { 00:10:55.833 "read": true, 00:10:55.833 "write": true, 00:10:55.833 "unmap": true, 00:10:55.833 "flush": true, 00:10:55.833 "reset": true, 00:10:55.833 "nvme_admin": false, 00:10:55.833 "nvme_io": false, 00:10:55.833 "nvme_io_md": false, 00:10:55.833 "write_zeroes": true, 00:10:55.833 "zcopy": true, 00:10:55.833 "get_zone_info": false, 00:10:55.833 "zone_management": false, 00:10:55.833 "zone_append": false, 00:10:55.833 "compare": false, 00:10:55.833 "compare_and_write": false, 00:10:55.833 "abort": true, 00:10:55.834 "seek_hole": false, 00:10:55.834 "seek_data": false, 00:10:55.834 "copy": true, 00:10:55.834 "nvme_iov_md": false 00:10:55.834 }, 00:10:55.834 "memory_domains": [ 00:10:55.834 { 00:10:55.834 "dma_device_id": "system", 00:10:55.834 "dma_device_type": 1 00:10:55.834 }, 00:10:55.834 { 00:10:55.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.834 "dma_device_type": 2 00:10:55.834 } 00:10:55.834 ], 00:10:55.834 "driver_specific": {} 00:10:55.834 } 00:10:55.834 ] 00:10:55.834 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.834 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.834 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:55.834 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:55.834 09:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:55.834 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.834 09:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.834 BaseBdev4 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:55.834 [ 00:10:55.834 { 00:10:55.834 "name": "BaseBdev4", 00:10:55.834 "aliases": [ 00:10:55.834 "56c65c6b-6685-49c7-bd8f-92bef6f5468e" 00:10:55.834 ], 00:10:55.834 "product_name": "Malloc disk", 00:10:55.834 "block_size": 512, 00:10:55.834 "num_blocks": 65536, 00:10:55.834 "uuid": "56c65c6b-6685-49c7-bd8f-92bef6f5468e", 00:10:55.834 "assigned_rate_limits": { 00:10:55.834 "rw_ios_per_sec": 0, 00:10:55.834 "rw_mbytes_per_sec": 0, 00:10:55.834 "r_mbytes_per_sec": 0, 00:10:55.834 "w_mbytes_per_sec": 0 00:10:55.834 }, 00:10:55.834 "claimed": false, 00:10:55.834 "zoned": false, 00:10:55.834 "supported_io_types": { 00:10:55.834 "read": true, 00:10:55.834 "write": true, 00:10:55.834 "unmap": true, 00:10:55.834 "flush": true, 00:10:55.834 "reset": true, 00:10:55.834 "nvme_admin": false, 00:10:55.834 "nvme_io": false, 00:10:55.834 "nvme_io_md": false, 00:10:55.834 "write_zeroes": true, 00:10:55.834 "zcopy": true, 00:10:55.834 "get_zone_info": false, 00:10:55.834 "zone_management": false, 00:10:55.834 "zone_append": false, 00:10:55.834 "compare": false, 00:10:55.834 "compare_and_write": false, 00:10:55.834 "abort": true, 00:10:55.834 "seek_hole": false, 00:10:55.834 "seek_data": false, 00:10:55.834 "copy": true, 00:10:55.834 "nvme_iov_md": false 00:10:55.834 }, 00:10:55.834 "memory_domains": [ 00:10:55.834 { 00:10:55.834 "dma_device_id": "system", 00:10:55.834 "dma_device_type": 1 00:10:55.834 }, 00:10:55.834 { 00:10:55.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.834 "dma_device_type": 2 00:10:55.834 } 00:10:55.834 ], 00:10:55.834 "driver_specific": {} 00:10:55.834 } 00:10:55.834 ] 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:55.834 09:48:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.834 [2024-12-06 09:48:21.071112] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.834 [2024-12-06 09:48:21.071218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.834 [2024-12-06 09:48:21.071264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.834 [2024-12-06 09:48:21.073238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.834 [2024-12-06 09:48:21.073333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.834 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.094 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.094 "name": "Existed_Raid", 00:10:56.094 "uuid": "7c4a14d9-6414-4761-8684-eb0e426046e6", 00:10:56.094 "strip_size_kb": 64, 00:10:56.094 "state": "configuring", 00:10:56.094 "raid_level": "concat", 00:10:56.094 "superblock": true, 00:10:56.094 "num_base_bdevs": 4, 00:10:56.094 "num_base_bdevs_discovered": 3, 00:10:56.094 "num_base_bdevs_operational": 4, 00:10:56.094 "base_bdevs_list": [ 00:10:56.094 { 00:10:56.094 "name": "BaseBdev1", 00:10:56.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.094 "is_configured": false, 00:10:56.094 "data_offset": 0, 00:10:56.094 "data_size": 0 00:10:56.094 }, 00:10:56.094 { 00:10:56.094 "name": "BaseBdev2", 00:10:56.094 "uuid": "110446ff-263e-49a0-b1ed-6b8cd89e5f36", 00:10:56.094 "is_configured": true, 00:10:56.094 "data_offset": 2048, 00:10:56.094 "data_size": 63488 
00:10:56.094 }, 00:10:56.094 { 00:10:56.094 "name": "BaseBdev3", 00:10:56.094 "uuid": "ef263986-de73-46db-9e11-2731d468a980", 00:10:56.094 "is_configured": true, 00:10:56.094 "data_offset": 2048, 00:10:56.095 "data_size": 63488 00:10:56.095 }, 00:10:56.095 { 00:10:56.095 "name": "BaseBdev4", 00:10:56.095 "uuid": "56c65c6b-6685-49c7-bd8f-92bef6f5468e", 00:10:56.095 "is_configured": true, 00:10:56.095 "data_offset": 2048, 00:10:56.095 "data_size": 63488 00:10:56.095 } 00:10:56.095 ] 00:10:56.095 }' 00:10:56.095 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.095 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.355 [2024-12-06 09:48:21.526327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.355 "name": "Existed_Raid", 00:10:56.355 "uuid": "7c4a14d9-6414-4761-8684-eb0e426046e6", 00:10:56.355 "strip_size_kb": 64, 00:10:56.355 "state": "configuring", 00:10:56.355 "raid_level": "concat", 00:10:56.355 "superblock": true, 00:10:56.355 "num_base_bdevs": 4, 00:10:56.355 "num_base_bdevs_discovered": 2, 00:10:56.355 "num_base_bdevs_operational": 4, 00:10:56.355 "base_bdevs_list": [ 00:10:56.355 { 00:10:56.355 "name": "BaseBdev1", 00:10:56.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.355 "is_configured": false, 00:10:56.355 "data_offset": 0, 00:10:56.355 "data_size": 0 00:10:56.355 }, 00:10:56.355 { 00:10:56.355 "name": null, 00:10:56.355 "uuid": "110446ff-263e-49a0-b1ed-6b8cd89e5f36", 00:10:56.355 "is_configured": false, 00:10:56.355 "data_offset": 0, 00:10:56.355 "data_size": 63488 
00:10:56.355 }, 00:10:56.355 { 00:10:56.355 "name": "BaseBdev3", 00:10:56.355 "uuid": "ef263986-de73-46db-9e11-2731d468a980", 00:10:56.355 "is_configured": true, 00:10:56.355 "data_offset": 2048, 00:10:56.355 "data_size": 63488 00:10:56.355 }, 00:10:56.355 { 00:10:56.355 "name": "BaseBdev4", 00:10:56.355 "uuid": "56c65c6b-6685-49c7-bd8f-92bef6f5468e", 00:10:56.355 "is_configured": true, 00:10:56.355 "data_offset": 2048, 00:10:56.355 "data_size": 63488 00:10:56.355 } 00:10:56.355 ] 00:10:56.355 }' 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.355 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.925 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.925 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.925 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:56.925 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.925 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.925 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:56.925 09:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:56.925 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.925 09:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.925 [2024-12-06 09:48:22.010923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.925 BaseBdev1 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.925 [ 00:10:56.925 { 00:10:56.925 "name": "BaseBdev1", 00:10:56.925 "aliases": [ 00:10:56.925 "7629936d-e5c2-445d-850a-7ad51369fde7" 00:10:56.925 ], 00:10:56.925 "product_name": "Malloc disk", 00:10:56.925 "block_size": 512, 00:10:56.925 "num_blocks": 65536, 00:10:56.925 "uuid": "7629936d-e5c2-445d-850a-7ad51369fde7", 00:10:56.925 "assigned_rate_limits": { 00:10:56.925 "rw_ios_per_sec": 0, 00:10:56.925 "rw_mbytes_per_sec": 0, 
00:10:56.925 "r_mbytes_per_sec": 0, 00:10:56.925 "w_mbytes_per_sec": 0 00:10:56.925 }, 00:10:56.925 "claimed": true, 00:10:56.925 "claim_type": "exclusive_write", 00:10:56.925 "zoned": false, 00:10:56.925 "supported_io_types": { 00:10:56.925 "read": true, 00:10:56.925 "write": true, 00:10:56.925 "unmap": true, 00:10:56.925 "flush": true, 00:10:56.925 "reset": true, 00:10:56.925 "nvme_admin": false, 00:10:56.925 "nvme_io": false, 00:10:56.925 "nvme_io_md": false, 00:10:56.925 "write_zeroes": true, 00:10:56.925 "zcopy": true, 00:10:56.925 "get_zone_info": false, 00:10:56.925 "zone_management": false, 00:10:56.925 "zone_append": false, 00:10:56.925 "compare": false, 00:10:56.925 "compare_and_write": false, 00:10:56.925 "abort": true, 00:10:56.925 "seek_hole": false, 00:10:56.925 "seek_data": false, 00:10:56.925 "copy": true, 00:10:56.925 "nvme_iov_md": false 00:10:56.925 }, 00:10:56.925 "memory_domains": [ 00:10:56.925 { 00:10:56.925 "dma_device_id": "system", 00:10:56.925 "dma_device_type": 1 00:10:56.925 }, 00:10:56.925 { 00:10:56.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.925 "dma_device_type": 2 00:10:56.925 } 00:10:56.925 ], 00:10:56.925 "driver_specific": {} 00:10:56.925 } 00:10:56.925 ] 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.925 09:48:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.925 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.926 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.926 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.926 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.926 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.926 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.926 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.926 "name": "Existed_Raid", 00:10:56.926 "uuid": "7c4a14d9-6414-4761-8684-eb0e426046e6", 00:10:56.926 "strip_size_kb": 64, 00:10:56.926 "state": "configuring", 00:10:56.926 "raid_level": "concat", 00:10:56.926 "superblock": true, 00:10:56.926 "num_base_bdevs": 4, 00:10:56.926 "num_base_bdevs_discovered": 3, 00:10:56.926 "num_base_bdevs_operational": 4, 00:10:56.926 "base_bdevs_list": [ 00:10:56.926 { 00:10:56.926 "name": "BaseBdev1", 00:10:56.926 "uuid": "7629936d-e5c2-445d-850a-7ad51369fde7", 00:10:56.926 "is_configured": true, 00:10:56.926 "data_offset": 2048, 00:10:56.926 "data_size": 63488 00:10:56.926 }, 00:10:56.926 { 
00:10:56.926 "name": null, 00:10:56.926 "uuid": "110446ff-263e-49a0-b1ed-6b8cd89e5f36", 00:10:56.926 "is_configured": false, 00:10:56.926 "data_offset": 0, 00:10:56.926 "data_size": 63488 00:10:56.926 }, 00:10:56.926 { 00:10:56.926 "name": "BaseBdev3", 00:10:56.926 "uuid": "ef263986-de73-46db-9e11-2731d468a980", 00:10:56.926 "is_configured": true, 00:10:56.926 "data_offset": 2048, 00:10:56.926 "data_size": 63488 00:10:56.926 }, 00:10:56.926 { 00:10:56.926 "name": "BaseBdev4", 00:10:56.926 "uuid": "56c65c6b-6685-49c7-bd8f-92bef6f5468e", 00:10:56.926 "is_configured": true, 00:10:56.926 "data_offset": 2048, 00:10:56.926 "data_size": 63488 00:10:56.926 } 00:10:56.926 ] 00:10:56.926 }' 00:10:56.926 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.926 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.494 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.494 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:57.494 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.494 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.494 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.494 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:57.494 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:57.494 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.495 [2024-12-06 09:48:22.538121] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.495 09:48:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.495 "name": "Existed_Raid", 00:10:57.495 "uuid": "7c4a14d9-6414-4761-8684-eb0e426046e6", 00:10:57.495 "strip_size_kb": 64, 00:10:57.495 "state": "configuring", 00:10:57.495 "raid_level": "concat", 00:10:57.495 "superblock": true, 00:10:57.495 "num_base_bdevs": 4, 00:10:57.495 "num_base_bdevs_discovered": 2, 00:10:57.495 "num_base_bdevs_operational": 4, 00:10:57.495 "base_bdevs_list": [ 00:10:57.495 { 00:10:57.495 "name": "BaseBdev1", 00:10:57.495 "uuid": "7629936d-e5c2-445d-850a-7ad51369fde7", 00:10:57.495 "is_configured": true, 00:10:57.495 "data_offset": 2048, 00:10:57.495 "data_size": 63488 00:10:57.495 }, 00:10:57.495 { 00:10:57.495 "name": null, 00:10:57.495 "uuid": "110446ff-263e-49a0-b1ed-6b8cd89e5f36", 00:10:57.495 "is_configured": false, 00:10:57.495 "data_offset": 0, 00:10:57.495 "data_size": 63488 00:10:57.495 }, 00:10:57.495 { 00:10:57.495 "name": null, 00:10:57.495 "uuid": "ef263986-de73-46db-9e11-2731d468a980", 00:10:57.495 "is_configured": false, 00:10:57.495 "data_offset": 0, 00:10:57.495 "data_size": 63488 00:10:57.495 }, 00:10:57.495 { 00:10:57.495 "name": "BaseBdev4", 00:10:57.495 "uuid": "56c65c6b-6685-49c7-bd8f-92bef6f5468e", 00:10:57.495 "is_configured": true, 00:10:57.495 "data_offset": 2048, 00:10:57.495 "data_size": 63488 00:10:57.495 } 00:10:57.495 ] 00:10:57.495 }' 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.495 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.755 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:57.755 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.755 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.755 
09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.755 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.755 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:57.755 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:57.755 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.755 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.755 [2024-12-06 09:48:23.005315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.755 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.016 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.016 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.016 "name": "Existed_Raid", 00:10:58.016 "uuid": "7c4a14d9-6414-4761-8684-eb0e426046e6", 00:10:58.016 "strip_size_kb": 64, 00:10:58.016 "state": "configuring", 00:10:58.016 "raid_level": "concat", 00:10:58.016 "superblock": true, 00:10:58.016 "num_base_bdevs": 4, 00:10:58.016 "num_base_bdevs_discovered": 3, 00:10:58.016 "num_base_bdevs_operational": 4, 00:10:58.016 "base_bdevs_list": [ 00:10:58.016 { 00:10:58.016 "name": "BaseBdev1", 00:10:58.016 "uuid": "7629936d-e5c2-445d-850a-7ad51369fde7", 00:10:58.016 "is_configured": true, 00:10:58.016 "data_offset": 2048, 00:10:58.016 "data_size": 63488 00:10:58.016 }, 00:10:58.016 { 00:10:58.016 "name": null, 00:10:58.016 "uuid": "110446ff-263e-49a0-b1ed-6b8cd89e5f36", 00:10:58.016 "is_configured": false, 00:10:58.016 "data_offset": 0, 00:10:58.016 "data_size": 63488 00:10:58.016 }, 00:10:58.016 { 00:10:58.016 "name": "BaseBdev3", 00:10:58.016 "uuid": "ef263986-de73-46db-9e11-2731d468a980", 00:10:58.016 "is_configured": true, 00:10:58.016 "data_offset": 2048, 00:10:58.016 "data_size": 63488 00:10:58.016 }, 00:10:58.016 { 00:10:58.016 "name": "BaseBdev4", 00:10:58.016 "uuid": 
"56c65c6b-6685-49c7-bd8f-92bef6f5468e", 00:10:58.016 "is_configured": true, 00:10:58.016 "data_offset": 2048, 00:10:58.016 "data_size": 63488 00:10:58.016 } 00:10:58.016 ] 00:10:58.016 }' 00:10:58.016 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.016 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.276 [2024-12-06 09:48:23.432613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.276 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.277 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.277 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.537 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.537 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.537 "name": "Existed_Raid", 00:10:58.537 "uuid": "7c4a14d9-6414-4761-8684-eb0e426046e6", 00:10:58.537 "strip_size_kb": 64, 00:10:58.537 "state": "configuring", 00:10:58.537 "raid_level": "concat", 00:10:58.537 "superblock": true, 00:10:58.537 "num_base_bdevs": 4, 00:10:58.537 "num_base_bdevs_discovered": 2, 00:10:58.537 "num_base_bdevs_operational": 4, 00:10:58.537 "base_bdevs_list": [ 00:10:58.537 { 00:10:58.537 "name": null, 00:10:58.537 
"uuid": "7629936d-e5c2-445d-850a-7ad51369fde7", 00:10:58.537 "is_configured": false, 00:10:58.537 "data_offset": 0, 00:10:58.537 "data_size": 63488 00:10:58.537 }, 00:10:58.537 { 00:10:58.537 "name": null, 00:10:58.537 "uuid": "110446ff-263e-49a0-b1ed-6b8cd89e5f36", 00:10:58.537 "is_configured": false, 00:10:58.537 "data_offset": 0, 00:10:58.537 "data_size": 63488 00:10:58.537 }, 00:10:58.537 { 00:10:58.537 "name": "BaseBdev3", 00:10:58.537 "uuid": "ef263986-de73-46db-9e11-2731d468a980", 00:10:58.537 "is_configured": true, 00:10:58.537 "data_offset": 2048, 00:10:58.537 "data_size": 63488 00:10:58.537 }, 00:10:58.537 { 00:10:58.537 "name": "BaseBdev4", 00:10:58.537 "uuid": "56c65c6b-6685-49c7-bd8f-92bef6f5468e", 00:10:58.537 "is_configured": true, 00:10:58.537 "data_offset": 2048, 00:10:58.537 "data_size": 63488 00:10:58.537 } 00:10:58.537 ] 00:10:58.537 }' 00:10:58.537 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.537 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.797 [2024-12-06 09:48:24.050323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.797 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.797 09:48:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.057 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.057 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.057 "name": "Existed_Raid", 00:10:59.057 "uuid": "7c4a14d9-6414-4761-8684-eb0e426046e6", 00:10:59.057 "strip_size_kb": 64, 00:10:59.057 "state": "configuring", 00:10:59.057 "raid_level": "concat", 00:10:59.057 "superblock": true, 00:10:59.057 "num_base_bdevs": 4, 00:10:59.057 "num_base_bdevs_discovered": 3, 00:10:59.057 "num_base_bdevs_operational": 4, 00:10:59.057 "base_bdevs_list": [ 00:10:59.057 { 00:10:59.057 "name": null, 00:10:59.057 "uuid": "7629936d-e5c2-445d-850a-7ad51369fde7", 00:10:59.057 "is_configured": false, 00:10:59.057 "data_offset": 0, 00:10:59.057 "data_size": 63488 00:10:59.057 }, 00:10:59.057 { 00:10:59.057 "name": "BaseBdev2", 00:10:59.057 "uuid": "110446ff-263e-49a0-b1ed-6b8cd89e5f36", 00:10:59.057 "is_configured": true, 00:10:59.057 "data_offset": 2048, 00:10:59.057 "data_size": 63488 00:10:59.057 }, 00:10:59.057 { 00:10:59.057 "name": "BaseBdev3", 00:10:59.057 "uuid": "ef263986-de73-46db-9e11-2731d468a980", 00:10:59.057 "is_configured": true, 00:10:59.057 "data_offset": 2048, 00:10:59.057 "data_size": 63488 00:10:59.057 }, 00:10:59.057 { 00:10:59.057 "name": "BaseBdev4", 00:10:59.057 "uuid": "56c65c6b-6685-49c7-bd8f-92bef6f5468e", 00:10:59.057 "is_configured": true, 00:10:59.057 "data_offset": 2048, 00:10:59.057 "data_size": 63488 00:10:59.057 } 00:10:59.057 ] 00:10:59.057 }' 00:10:59.057 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.057 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.317 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:59.317 09:48:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.317 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.317 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.317 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.317 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:59.317 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.317 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.317 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.317 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:59.317 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7629936d-e5c2-445d-850a-7ad51369fde7 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.576 [2024-12-06 09:48:24.628532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:59.576 [2024-12-06 09:48:24.628864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:59.576 [2024-12-06 09:48:24.628913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:59.576 [2024-12-06 09:48:24.629218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:59.576 [2024-12-06 09:48:24.629392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:59.576 [2024-12-06 09:48:24.629433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:59.576 NewBaseBdev 00:10:59.576 [2024-12-06 09:48:24.629624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.576 09:48:24
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.576 [ 00:10:59.576 { 00:10:59.576 "name": "NewBaseBdev", 00:10:59.576 "aliases": [ 00:10:59.576 "7629936d-e5c2-445d-850a-7ad51369fde7" 00:10:59.576 ], 00:10:59.576 "product_name": "Malloc disk", 00:10:59.576 "block_size": 512, 00:10:59.576 "num_blocks": 65536, 00:10:59.576 "uuid": "7629936d-e5c2-445d-850a-7ad51369fde7", 00:10:59.576 "assigned_rate_limits": { 00:10:59.576 "rw_ios_per_sec": 0, 00:10:59.576 "rw_mbytes_per_sec": 0, 00:10:59.576 "r_mbytes_per_sec": 0, 00:10:59.576 "w_mbytes_per_sec": 0 00:10:59.576 }, 00:10:59.576 "claimed": true, 00:10:59.576 "claim_type": "exclusive_write", 00:10:59.576 "zoned": false, 00:10:59.576 "supported_io_types": { 00:10:59.576 "read": true, 00:10:59.576 "write": true, 00:10:59.576 "unmap": true, 00:10:59.576 "flush": true, 00:10:59.576 "reset": true, 00:10:59.576 "nvme_admin": false, 00:10:59.576 "nvme_io": false, 00:10:59.576 "nvme_io_md": false, 00:10:59.576 "write_zeroes": true, 00:10:59.576 "zcopy": true, 00:10:59.576 "get_zone_info": false, 00:10:59.576 "zone_management": false, 00:10:59.576 "zone_append": false, 00:10:59.576 "compare": false, 00:10:59.576 "compare_and_write": false, 00:10:59.576 "abort": true, 00:10:59.576 "seek_hole": false, 00:10:59.576 "seek_data": false, 00:10:59.576 "copy": true, 00:10:59.576 "nvme_iov_md": false 00:10:59.576 }, 00:10:59.576 "memory_domains": [ 00:10:59.576 { 00:10:59.576 "dma_device_id": "system", 00:10:59.576 "dma_device_type": 1 00:10:59.576 }, 00:10:59.576 { 00:10:59.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.576 "dma_device_type": 2 00:10:59.576 } 00:10:59.576 ], 00:10:59.576 "driver_specific": {} 00:10:59.576 } 00:10:59.576 ] 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:59.576 09:48:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.576 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.577 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.577 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.577 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.577 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.577 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.577 "name": "Existed_Raid", 00:10:59.577 "uuid": "7c4a14d9-6414-4761-8684-eb0e426046e6", 00:10:59.577 "strip_size_kb": 64, 00:10:59.577 
"state": "online", 00:10:59.577 "raid_level": "concat", 00:10:59.577 "superblock": true, 00:10:59.577 "num_base_bdevs": 4, 00:10:59.577 "num_base_bdevs_discovered": 4, 00:10:59.577 "num_base_bdevs_operational": 4, 00:10:59.577 "base_bdevs_list": [ 00:10:59.577 { 00:10:59.577 "name": "NewBaseBdev", 00:10:59.577 "uuid": "7629936d-e5c2-445d-850a-7ad51369fde7", 00:10:59.577 "is_configured": true, 00:10:59.577 "data_offset": 2048, 00:10:59.577 "data_size": 63488 00:10:59.577 }, 00:10:59.577 { 00:10:59.577 "name": "BaseBdev2", 00:10:59.577 "uuid": "110446ff-263e-49a0-b1ed-6b8cd89e5f36", 00:10:59.577 "is_configured": true, 00:10:59.577 "data_offset": 2048, 00:10:59.577 "data_size": 63488 00:10:59.577 }, 00:10:59.577 { 00:10:59.577 "name": "BaseBdev3", 00:10:59.577 "uuid": "ef263986-de73-46db-9e11-2731d468a980", 00:10:59.577 "is_configured": true, 00:10:59.577 "data_offset": 2048, 00:10:59.577 "data_size": 63488 00:10:59.577 }, 00:10:59.577 { 00:10:59.577 "name": "BaseBdev4", 00:10:59.577 "uuid": "56c65c6b-6685-49c7-bd8f-92bef6f5468e", 00:10:59.577 "is_configured": true, 00:10:59.577 "data_offset": 2048, 00:10:59.577 "data_size": 63488 00:10:59.577 } 00:10:59.577 ] 00:10:59.577 }' 00:10:59.577 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.577 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.836 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:59.836 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:59.836 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.836 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.836 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.836 
09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.836 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.836 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:59.836 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.836 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.836 [2024-12-06 09:48:25.080336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.836 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.836 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.836 "name": "Existed_Raid", 00:10:59.836 "aliases": [ 00:10:59.836 "7c4a14d9-6414-4761-8684-eb0e426046e6" 00:10:59.836 ], 00:10:59.836 "product_name": "Raid Volume", 00:10:59.836 "block_size": 512, 00:10:59.836 "num_blocks": 253952, 00:10:59.836 "uuid": "7c4a14d9-6414-4761-8684-eb0e426046e6", 00:10:59.836 "assigned_rate_limits": { 00:10:59.836 "rw_ios_per_sec": 0, 00:10:59.836 "rw_mbytes_per_sec": 0, 00:10:59.836 "r_mbytes_per_sec": 0, 00:10:59.836 "w_mbytes_per_sec": 0 00:10:59.836 }, 00:10:59.836 "claimed": false, 00:10:59.836 "zoned": false, 00:10:59.836 "supported_io_types": { 00:10:59.836 "read": true, 00:10:59.836 "write": true, 00:10:59.836 "unmap": true, 00:10:59.836 "flush": true, 00:10:59.836 "reset": true, 00:10:59.836 "nvme_admin": false, 00:10:59.836 "nvme_io": false, 00:10:59.836 "nvme_io_md": false, 00:10:59.836 "write_zeroes": true, 00:10:59.836 "zcopy": false, 00:10:59.836 "get_zone_info": false, 00:10:59.836 "zone_management": false, 00:10:59.836 "zone_append": false, 00:10:59.836 "compare": false, 00:10:59.836 "compare_and_write": false, 00:10:59.836 "abort": 
false, 00:10:59.836 "seek_hole": false, 00:10:59.836 "seek_data": false, 00:10:59.836 "copy": false, 00:10:59.836 "nvme_iov_md": false 00:10:59.836 }, 00:10:59.836 "memory_domains": [ 00:10:59.836 { 00:10:59.836 "dma_device_id": "system", 00:10:59.836 "dma_device_type": 1 00:10:59.836 }, 00:10:59.836 { 00:10:59.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.836 "dma_device_type": 2 00:10:59.836 }, 00:10:59.836 { 00:10:59.836 "dma_device_id": "system", 00:10:59.836 "dma_device_type": 1 00:10:59.836 }, 00:10:59.836 { 00:10:59.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.836 "dma_device_type": 2 00:10:59.836 }, 00:10:59.836 { 00:10:59.836 "dma_device_id": "system", 00:10:59.836 "dma_device_type": 1 00:10:59.836 }, 00:10:59.836 { 00:10:59.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.836 "dma_device_type": 2 00:10:59.836 }, 00:10:59.836 { 00:10:59.836 "dma_device_id": "system", 00:10:59.836 "dma_device_type": 1 00:10:59.836 }, 00:10:59.836 { 00:10:59.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.836 "dma_device_type": 2 00:10:59.836 } 00:10:59.836 ], 00:10:59.836 "driver_specific": { 00:10:59.836 "raid": { 00:10:59.836 "uuid": "7c4a14d9-6414-4761-8684-eb0e426046e6", 00:10:59.836 "strip_size_kb": 64, 00:10:59.836 "state": "online", 00:10:59.836 "raid_level": "concat", 00:10:59.836 "superblock": true, 00:10:59.836 "num_base_bdevs": 4, 00:10:59.836 "num_base_bdevs_discovered": 4, 00:10:59.836 "num_base_bdevs_operational": 4, 00:10:59.836 "base_bdevs_list": [ 00:10:59.836 { 00:10:59.836 "name": "NewBaseBdev", 00:10:59.836 "uuid": "7629936d-e5c2-445d-850a-7ad51369fde7", 00:10:59.836 "is_configured": true, 00:10:59.836 "data_offset": 2048, 00:10:59.836 "data_size": 63488 00:10:59.836 }, 00:10:59.836 { 00:10:59.836 "name": "BaseBdev2", 00:10:59.836 "uuid": "110446ff-263e-49a0-b1ed-6b8cd89e5f36", 00:10:59.837 "is_configured": true, 00:10:59.837 "data_offset": 2048, 00:10:59.837 "data_size": 63488 00:10:59.837 }, 00:10:59.837 { 00:10:59.837 
"name": "BaseBdev3", 00:10:59.837 "uuid": "ef263986-de73-46db-9e11-2731d468a980", 00:10:59.837 "is_configured": true, 00:10:59.837 "data_offset": 2048, 00:10:59.837 "data_size": 63488 00:10:59.837 }, 00:10:59.837 { 00:10:59.837 "name": "BaseBdev4", 00:10:59.837 "uuid": "56c65c6b-6685-49c7-bd8f-92bef6f5468e", 00:10:59.837 "is_configured": true, 00:10:59.837 "data_offset": 2048, 00:10:59.837 "data_size": 63488 00:10:59.837 } 00:10:59.837 ] 00:10:59.837 } 00:10:59.837 } 00:10:59.837 }' 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:00.096 BaseBdev2 00:11:00.096 BaseBdev3 00:11:00.096 BaseBdev4' 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.096 09:48:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.096 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.355 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:00.355 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:00.355 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:00.355 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.355 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.355 [2024-12-06 09:48:25.375412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:00.355 [2024-12-06 09:48:25.375487] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.355 [2024-12-06 09:48:25.375590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.356 [2024-12-06 09:48:25.375675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.356 [2024-12-06 09:48:25.375719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:00.356 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.356 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71884 00:11:00.356 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71884 ']' 00:11:00.356 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71884 00:11:00.356 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:00.356 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.356 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71884 00:11:00.356 killing process with pid 71884 00:11:00.356 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.356 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.356 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71884' 00:11:00.356 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71884 00:11:00.356 [2024-12-06 09:48:25.423076] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.356 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71884 00:11:00.615 [2024-12-06 09:48:25.819777] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.996 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:01.996 00:11:01.996 real 0m11.554s 00:11:01.996 user 0m18.436s 00:11:01.996 sys 0m2.001s 00:11:01.996 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.996 09:48:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.996 ************************************ 00:11:01.996 END TEST raid_state_function_test_sb 00:11:01.996 ************************************ 00:11:01.996 09:48:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:01.996 09:48:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:01.996 09:48:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.996 09:48:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:01.996 ************************************ 00:11:01.996 START TEST raid_superblock_test 00:11:01.996 ************************************ 00:11:01.996 09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:01.996 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:01.996 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:01.996 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72565 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72565 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72565 ']' 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.997 09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.997 [2024-12-06 09:48:27.106256] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:11:01.997 [2024-12-06 09:48:27.106463] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72565 ] 00:11:02.257 [2024-12-06 09:48:27.283193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.257 [2024-12-06 09:48:27.396801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.517 [2024-12-06 09:48:27.598109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.517 [2024-12-06 09:48:27.598177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:02.776 
09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.776 malloc1 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.776 09:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.776 [2024-12-06 09:48:27.999761] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:02.776 [2024-12-06 09:48:27.999831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.776 [2024-12-06 09:48:27.999853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:02.776 [2024-12-06 09:48:27.999862] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.776 [2024-12-06 09:48:28.002021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.776 [2024-12-06 09:48:28.002060] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:02.776 pt1 00:11:02.776 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.776 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:02.776 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:02.776 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:02.776 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:02.776 09:48:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:02.776 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:02.776 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:02.776 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:02.776 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:02.776 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.776 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.036 malloc2 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.036 [2024-12-06 09:48:28.055037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:03.036 [2024-12-06 09:48:28.055153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.036 [2024-12-06 09:48:28.055199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:03.036 [2024-12-06 09:48:28.055231] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.036 [2024-12-06 09:48:28.057464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.036 [2024-12-06 09:48:28.057535] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:03.036 
pt2 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.036 malloc3 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.036 [2024-12-06 09:48:28.123660] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:03.036 [2024-12-06 09:48:28.123758] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.036 [2024-12-06 09:48:28.123820] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:03.036 [2024-12-06 09:48:28.123849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.036 [2024-12-06 09:48:28.126110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.036 [2024-12-06 09:48:28.126207] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:03.036 pt3 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.036 malloc4 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.036 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.036 [2024-12-06 09:48:28.175992] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:03.036 [2024-12-06 09:48:28.176116] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.036 [2024-12-06 09:48:28.176175] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:03.037 [2024-12-06 09:48:28.176211] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.037 [2024-12-06 09:48:28.178412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.037 [2024-12-06 09:48:28.178487] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:03.037 pt4 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.037 [2024-12-06 09:48:28.188006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:03.037 [2024-12-06 
09:48:28.189898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:03.037 [2024-12-06 09:48:28.190028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:03.037 [2024-12-06 09:48:28.190102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:03.037 [2024-12-06 09:48:28.190332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:03.037 [2024-12-06 09:48:28.190380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:03.037 [2024-12-06 09:48:28.190643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:03.037 [2024-12-06 09:48:28.190844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:03.037 [2024-12-06 09:48:28.190892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:03.037 [2024-12-06 09:48:28.191082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
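The trace above (bdev_raid.sh@416–430) builds four malloc bdevs, wraps each in a passthru bdev with a fixed UUID, then assembles them into a concat RAID volume with an on-disk superblock. A minimal sketch of that RPC sequence, with a stand-in `rpc_cmd` that only echoes the command (the real helper drives the SPDK app over /var/tmp/spdk.sock), so the sketch is self-contained:

```shell
# Stand-in for the test's rpc_cmd helper; the real one sends JSON-RPC to the
# bdev_svc app listening on /var/tmp/spdk.sock.
rpc_cmd() { echo "rpc.py $*"; }

# Four 32 MiB malloc bdevs with a 512-byte block size, each wrapped in a
# passthru bdev with a fixed UUID (bdev_raid.sh@425-426).
for i in 1 2 3 4; do
  rpc_cmd bdev_malloc_create 32 512 -b "malloc$i"
  rpc_cmd bdev_passthru_create -b "malloc$i" -p "pt$i" \
      -u "00000000-0000-0000-0000-00000000000$i"
done

# Assemble the passthru bdevs into a concat raid with a 64 KiB strip size
# and a superblock (-s), as in bdev_raid.sh@430.
rpc_cmd bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
```

The resulting volume reports blockcnt 253952 in the trace — four base bdevs of 63488 data blocks each, after the 2048-block superblock offset visible in the base_bdevs_list.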
raid_bdev_info 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.037 "name": "raid_bdev1", 00:11:03.037 "uuid": "958ecc0e-f517-4f8f-b993-344aea3aba31", 00:11:03.037 "strip_size_kb": 64, 00:11:03.037 "state": "online", 00:11:03.037 "raid_level": "concat", 00:11:03.037 "superblock": true, 00:11:03.037 "num_base_bdevs": 4, 00:11:03.037 "num_base_bdevs_discovered": 4, 00:11:03.037 "num_base_bdevs_operational": 4, 00:11:03.037 "base_bdevs_list": [ 00:11:03.037 { 00:11:03.037 "name": "pt1", 00:11:03.037 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.037 "is_configured": true, 00:11:03.037 "data_offset": 2048, 00:11:03.037 "data_size": 63488 00:11:03.037 }, 00:11:03.037 { 00:11:03.037 "name": "pt2", 00:11:03.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.037 "is_configured": true, 00:11:03.037 "data_offset": 2048, 00:11:03.037 "data_size": 63488 00:11:03.037 }, 00:11:03.037 { 00:11:03.037 "name": "pt3", 00:11:03.037 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.037 "is_configured": true, 00:11:03.037 "data_offset": 2048, 00:11:03.037 
"data_size": 63488 00:11:03.037 }, 00:11:03.037 { 00:11:03.037 "name": "pt4", 00:11:03.037 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:03.037 "is_configured": true, 00:11:03.037 "data_offset": 2048, 00:11:03.037 "data_size": 63488 00:11:03.037 } 00:11:03.037 ] 00:11:03.037 }' 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.037 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.605 [2024-12-06 09:48:28.663522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.605 "name": "raid_bdev1", 00:11:03.605 "aliases": [ 00:11:03.605 "958ecc0e-f517-4f8f-b993-344aea3aba31" 
00:11:03.605 ], 00:11:03.605 "product_name": "Raid Volume", 00:11:03.605 "block_size": 512, 00:11:03.605 "num_blocks": 253952, 00:11:03.605 "uuid": "958ecc0e-f517-4f8f-b993-344aea3aba31", 00:11:03.605 "assigned_rate_limits": { 00:11:03.605 "rw_ios_per_sec": 0, 00:11:03.605 "rw_mbytes_per_sec": 0, 00:11:03.605 "r_mbytes_per_sec": 0, 00:11:03.605 "w_mbytes_per_sec": 0 00:11:03.605 }, 00:11:03.605 "claimed": false, 00:11:03.605 "zoned": false, 00:11:03.605 "supported_io_types": { 00:11:03.605 "read": true, 00:11:03.605 "write": true, 00:11:03.605 "unmap": true, 00:11:03.605 "flush": true, 00:11:03.605 "reset": true, 00:11:03.605 "nvme_admin": false, 00:11:03.605 "nvme_io": false, 00:11:03.605 "nvme_io_md": false, 00:11:03.605 "write_zeroes": true, 00:11:03.605 "zcopy": false, 00:11:03.605 "get_zone_info": false, 00:11:03.605 "zone_management": false, 00:11:03.605 "zone_append": false, 00:11:03.605 "compare": false, 00:11:03.605 "compare_and_write": false, 00:11:03.605 "abort": false, 00:11:03.605 "seek_hole": false, 00:11:03.605 "seek_data": false, 00:11:03.605 "copy": false, 00:11:03.605 "nvme_iov_md": false 00:11:03.605 }, 00:11:03.605 "memory_domains": [ 00:11:03.605 { 00:11:03.605 "dma_device_id": "system", 00:11:03.605 "dma_device_type": 1 00:11:03.605 }, 00:11:03.605 { 00:11:03.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.605 "dma_device_type": 2 00:11:03.605 }, 00:11:03.605 { 00:11:03.605 "dma_device_id": "system", 00:11:03.605 "dma_device_type": 1 00:11:03.605 }, 00:11:03.605 { 00:11:03.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.605 "dma_device_type": 2 00:11:03.605 }, 00:11:03.605 { 00:11:03.605 "dma_device_id": "system", 00:11:03.605 "dma_device_type": 1 00:11:03.605 }, 00:11:03.605 { 00:11:03.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.605 "dma_device_type": 2 00:11:03.605 }, 00:11:03.605 { 00:11:03.605 "dma_device_id": "system", 00:11:03.605 "dma_device_type": 1 00:11:03.605 }, 00:11:03.605 { 00:11:03.605 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:03.605 "dma_device_type": 2 00:11:03.605 } 00:11:03.605 ], 00:11:03.605 "driver_specific": { 00:11:03.605 "raid": { 00:11:03.605 "uuid": "958ecc0e-f517-4f8f-b993-344aea3aba31", 00:11:03.605 "strip_size_kb": 64, 00:11:03.605 "state": "online", 00:11:03.605 "raid_level": "concat", 00:11:03.605 "superblock": true, 00:11:03.605 "num_base_bdevs": 4, 00:11:03.605 "num_base_bdevs_discovered": 4, 00:11:03.605 "num_base_bdevs_operational": 4, 00:11:03.605 "base_bdevs_list": [ 00:11:03.605 { 00:11:03.605 "name": "pt1", 00:11:03.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:03.605 "is_configured": true, 00:11:03.605 "data_offset": 2048, 00:11:03.605 "data_size": 63488 00:11:03.605 }, 00:11:03.605 { 00:11:03.605 "name": "pt2", 00:11:03.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:03.605 "is_configured": true, 00:11:03.605 "data_offset": 2048, 00:11:03.605 "data_size": 63488 00:11:03.605 }, 00:11:03.605 { 00:11:03.605 "name": "pt3", 00:11:03.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:03.605 "is_configured": true, 00:11:03.605 "data_offset": 2048, 00:11:03.605 "data_size": 63488 00:11:03.605 }, 00:11:03.605 { 00:11:03.605 "name": "pt4", 00:11:03.605 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:03.605 "is_configured": true, 00:11:03.605 "data_offset": 2048, 00:11:03.605 "data_size": 63488 00:11:03.605 } 00:11:03.605 ] 00:11:03.605 } 00:11:03.605 } 00:11:03.605 }' 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:03.605 pt2 00:11:03.605 pt3 00:11:03.605 pt4' 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.605 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.863 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.863 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.863 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.864 09:48:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:03.864 09:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.864 [2024-12-06 09:48:28.990917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=958ecc0e-f517-4f8f-b993-344aea3aba31 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 958ecc0e-f517-4f8f-b993-344aea3aba31 ']' 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.864 [2024-12-06 09:48:29.034524] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.864 [2024-12-06 09:48:29.034548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.864 [2024-12-06 09:48:29.034621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.864 [2024-12-06 09:48:29.034689] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.864 [2024-12-06 09:48:29.034702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.864 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.123 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.123 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:04.123 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.123 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:04.123 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.123 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.123 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:04.123 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:04.123 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:04.123 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:04.123 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:04.123 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:04.124 09:48:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.124 [2024-12-06 09:48:29.202279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:04.124 [2024-12-06 09:48:29.204080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:04.124 [2024-12-06 09:48:29.204132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:04.124 [2024-12-06 09:48:29.204176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:04.124 [2024-12-06 09:48:29.204238] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:04.124 [2024-12-06 09:48:29.204290] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:04.124 [2024-12-06 09:48:29.204309] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:04.124 [2024-12-06 09:48:29.204328] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:04.124 [2024-12-06 09:48:29.204341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.124 [2024-12-06 09:48:29.204352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:04.124 request: 00:11:04.124 { 00:11:04.124 "name": "raid_bdev1", 00:11:04.124 "raid_level": "concat", 00:11:04.124 "base_bdevs": [ 00:11:04.124 "malloc1", 00:11:04.124 "malloc2", 00:11:04.124 "malloc3", 00:11:04.124 "malloc4" 00:11:04.124 ], 00:11:04.124 "strip_size_kb": 64, 00:11:04.124 "superblock": false, 00:11:04.124 "method": "bdev_raid_create", 00:11:04.124 "req_id": 1 00:11:04.124 } 00:11:04.124 Got JSON-RPC error response 00:11:04.124 response: 00:11:04.124 { 00:11:04.124 "code": -17, 00:11:04.124 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:04.124 } 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.124 [2024-12-06 09:48:29.270169] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:04.124 [2024-12-06 09:48:29.270293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.124 [2024-12-06 09:48:29.270331] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:04.124 [2024-12-06 09:48:29.270363] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.124 [2024-12-06 09:48:29.272553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.124 [2024-12-06 09:48:29.272629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:04.124 [2024-12-06 09:48:29.272742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:04.124 [2024-12-06 09:48:29.272834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:04.124 pt1 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.124 "name": "raid_bdev1", 00:11:04.124 "uuid": "958ecc0e-f517-4f8f-b993-344aea3aba31", 00:11:04.124 "strip_size_kb": 64, 00:11:04.124 "state": "configuring", 00:11:04.124 "raid_level": "concat", 00:11:04.124 "superblock": true, 00:11:04.124 "num_base_bdevs": 4, 00:11:04.124 "num_base_bdevs_discovered": 1, 00:11:04.124 "num_base_bdevs_operational": 4, 00:11:04.124 "base_bdevs_list": [ 00:11:04.124 { 00:11:04.124 "name": "pt1", 00:11:04.124 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.124 "is_configured": true, 00:11:04.124 "data_offset": 2048, 00:11:04.124 "data_size": 63488 00:11:04.124 }, 00:11:04.124 { 00:11:04.124 "name": null, 00:11:04.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.124 "is_configured": false, 00:11:04.124 "data_offset": 2048, 00:11:04.124 "data_size": 63488 00:11:04.124 }, 00:11:04.124 { 00:11:04.124 "name": null, 00:11:04.124 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.124 "is_configured": false, 00:11:04.124 "data_offset": 2048, 00:11:04.124 "data_size": 63488 00:11:04.124 }, 00:11:04.124 { 00:11:04.124 "name": null, 00:11:04.124 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.124 "is_configured": false, 00:11:04.124 "data_offset": 2048, 00:11:04.124 "data_size": 63488 00:11:04.124 } 00:11:04.124 ] 00:11:04.124 }' 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.124 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.693 [2024-12-06 09:48:29.689428] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.693 [2024-12-06 09:48:29.689504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.693 [2024-12-06 09:48:29.689525] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:04.693 [2024-12-06 09:48:29.689537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.693 [2024-12-06 09:48:29.689970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.693 [2024-12-06 09:48:29.689989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.693 [2024-12-06 09:48:29.690072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.693 [2024-12-06 09:48:29.690094] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.693 pt2 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.693 [2024-12-06 09:48:29.701439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.693 09:48:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.693 "name": "raid_bdev1", 00:11:04.693 "uuid": "958ecc0e-f517-4f8f-b993-344aea3aba31", 00:11:04.693 "strip_size_kb": 64, 00:11:04.693 "state": "configuring", 00:11:04.693 "raid_level": "concat", 00:11:04.693 "superblock": true, 00:11:04.693 "num_base_bdevs": 4, 00:11:04.693 "num_base_bdevs_discovered": 1, 00:11:04.693 "num_base_bdevs_operational": 4, 00:11:04.693 "base_bdevs_list": [ 00:11:04.693 { 00:11:04.693 "name": "pt1", 00:11:04.693 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.693 "is_configured": true, 00:11:04.693 "data_offset": 2048, 00:11:04.693 "data_size": 63488 00:11:04.693 }, 00:11:04.693 { 00:11:04.693 "name": null, 00:11:04.693 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.693 "is_configured": false, 00:11:04.693 "data_offset": 0, 00:11:04.693 "data_size": 63488 00:11:04.693 }, 00:11:04.693 { 00:11:04.693 "name": null, 00:11:04.693 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.693 "is_configured": false, 00:11:04.693 "data_offset": 2048, 00:11:04.693 "data_size": 63488 00:11:04.693 }, 00:11:04.693 { 00:11:04.693 "name": null, 00:11:04.693 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:04.693 "is_configured": false, 00:11:04.693 "data_offset": 2048, 00:11:04.693 "data_size": 63488 00:11:04.693 } 00:11:04.693 ] 00:11:04.693 }' 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.693 09:48:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.952 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:04.952 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:04.952 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:04.952 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.952 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.953 [2024-12-06 09:48:30.148663] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:04.953 [2024-12-06 09:48:30.148789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.953 [2024-12-06 09:48:30.148831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:04.953 [2024-12-06 09:48:30.148873] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.953 [2024-12-06 09:48:30.149405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.953 [2024-12-06 09:48:30.149467] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:04.953 [2024-12-06 09:48:30.149586] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:04.953 [2024-12-06 09:48:30.149640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:04.953 pt2 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.953 [2024-12-06 09:48:30.160608] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:04.953 [2024-12-06 09:48:30.160699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.953 [2024-12-06 09:48:30.160760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:04.953 [2024-12-06 09:48:30.160792] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.953 [2024-12-06 09:48:30.161254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.953 [2024-12-06 09:48:30.161315] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:04.953 [2024-12-06 09:48:30.161414] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:04.953 [2024-12-06 09:48:30.161475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:04.953 pt3 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.953 [2024-12-06 09:48:30.172562] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:04.953 [2024-12-06 09:48:30.172607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.953 [2024-12-06 09:48:30.172623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:04.953 [2024-12-06 09:48:30.172632] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.953 [2024-12-06 09:48:30.172996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.953 [2024-12-06 09:48:30.173012] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:04.953 [2024-12-06 09:48:30.173072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:04.953 [2024-12-06 09:48:30.173093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:04.953 [2024-12-06 09:48:30.173253] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:04.953 [2024-12-06 09:48:30.173263] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:04.953 [2024-12-06 09:48:30.173529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:04.953 [2024-12-06 09:48:30.173684] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:04.953 [2024-12-06 09:48:30.173703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:04.953 [2024-12-06 09:48:30.173822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.953 pt4 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.953 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.212 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.212 "name": "raid_bdev1", 00:11:05.212 "uuid": "958ecc0e-f517-4f8f-b993-344aea3aba31", 00:11:05.212 "strip_size_kb": 64, 00:11:05.212 "state": "online", 00:11:05.212 "raid_level": "concat", 00:11:05.212 
"superblock": true, 00:11:05.212 "num_base_bdevs": 4, 00:11:05.212 "num_base_bdevs_discovered": 4, 00:11:05.212 "num_base_bdevs_operational": 4, 00:11:05.212 "base_bdevs_list": [ 00:11:05.212 { 00:11:05.212 "name": "pt1", 00:11:05.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.212 "is_configured": true, 00:11:05.212 "data_offset": 2048, 00:11:05.212 "data_size": 63488 00:11:05.212 }, 00:11:05.212 { 00:11:05.212 "name": "pt2", 00:11:05.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.212 "is_configured": true, 00:11:05.212 "data_offset": 2048, 00:11:05.212 "data_size": 63488 00:11:05.212 }, 00:11:05.212 { 00:11:05.212 "name": "pt3", 00:11:05.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.212 "is_configured": true, 00:11:05.212 "data_offset": 2048, 00:11:05.212 "data_size": 63488 00:11:05.212 }, 00:11:05.212 { 00:11:05.212 "name": "pt4", 00:11:05.213 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.213 "is_configured": true, 00:11:05.213 "data_offset": 2048, 00:11:05.213 "data_size": 63488 00:11:05.213 } 00:11:05.213 ] 00:11:05.213 }' 00:11:05.213 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.213 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.472 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:05.472 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:05.472 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.472 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.472 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.472 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.472 09:48:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.472 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.472 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.472 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.472 [2024-12-06 09:48:30.640132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.472 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.472 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.472 "name": "raid_bdev1", 00:11:05.472 "aliases": [ 00:11:05.472 "958ecc0e-f517-4f8f-b993-344aea3aba31" 00:11:05.472 ], 00:11:05.472 "product_name": "Raid Volume", 00:11:05.472 "block_size": 512, 00:11:05.472 "num_blocks": 253952, 00:11:05.472 "uuid": "958ecc0e-f517-4f8f-b993-344aea3aba31", 00:11:05.472 "assigned_rate_limits": { 00:11:05.472 "rw_ios_per_sec": 0, 00:11:05.472 "rw_mbytes_per_sec": 0, 00:11:05.472 "r_mbytes_per_sec": 0, 00:11:05.472 "w_mbytes_per_sec": 0 00:11:05.472 }, 00:11:05.472 "claimed": false, 00:11:05.472 "zoned": false, 00:11:05.472 "supported_io_types": { 00:11:05.472 "read": true, 00:11:05.472 "write": true, 00:11:05.472 "unmap": true, 00:11:05.472 "flush": true, 00:11:05.472 "reset": true, 00:11:05.472 "nvme_admin": false, 00:11:05.472 "nvme_io": false, 00:11:05.472 "nvme_io_md": false, 00:11:05.472 "write_zeroes": true, 00:11:05.472 "zcopy": false, 00:11:05.472 "get_zone_info": false, 00:11:05.472 "zone_management": false, 00:11:05.472 "zone_append": false, 00:11:05.472 "compare": false, 00:11:05.472 "compare_and_write": false, 00:11:05.472 "abort": false, 00:11:05.472 "seek_hole": false, 00:11:05.472 "seek_data": false, 00:11:05.472 "copy": false, 00:11:05.472 "nvme_iov_md": false 00:11:05.472 }, 00:11:05.472 
"memory_domains": [ 00:11:05.472 { 00:11:05.472 "dma_device_id": "system", 00:11:05.472 "dma_device_type": 1 00:11:05.472 }, 00:11:05.472 { 00:11:05.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.472 "dma_device_type": 2 00:11:05.472 }, 00:11:05.472 { 00:11:05.472 "dma_device_id": "system", 00:11:05.472 "dma_device_type": 1 00:11:05.472 }, 00:11:05.472 { 00:11:05.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.472 "dma_device_type": 2 00:11:05.472 }, 00:11:05.472 { 00:11:05.472 "dma_device_id": "system", 00:11:05.472 "dma_device_type": 1 00:11:05.472 }, 00:11:05.472 { 00:11:05.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.472 "dma_device_type": 2 00:11:05.472 }, 00:11:05.472 { 00:11:05.472 "dma_device_id": "system", 00:11:05.472 "dma_device_type": 1 00:11:05.472 }, 00:11:05.472 { 00:11:05.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.472 "dma_device_type": 2 00:11:05.472 } 00:11:05.472 ], 00:11:05.472 "driver_specific": { 00:11:05.472 "raid": { 00:11:05.472 "uuid": "958ecc0e-f517-4f8f-b993-344aea3aba31", 00:11:05.472 "strip_size_kb": 64, 00:11:05.472 "state": "online", 00:11:05.472 "raid_level": "concat", 00:11:05.472 "superblock": true, 00:11:05.472 "num_base_bdevs": 4, 00:11:05.472 "num_base_bdevs_discovered": 4, 00:11:05.472 "num_base_bdevs_operational": 4, 00:11:05.472 "base_bdevs_list": [ 00:11:05.472 { 00:11:05.472 "name": "pt1", 00:11:05.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:05.472 "is_configured": true, 00:11:05.473 "data_offset": 2048, 00:11:05.473 "data_size": 63488 00:11:05.473 }, 00:11:05.473 { 00:11:05.473 "name": "pt2", 00:11:05.473 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:05.473 "is_configured": true, 00:11:05.473 "data_offset": 2048, 00:11:05.473 "data_size": 63488 00:11:05.473 }, 00:11:05.473 { 00:11:05.473 "name": "pt3", 00:11:05.473 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:05.473 "is_configured": true, 00:11:05.473 "data_offset": 2048, 00:11:05.473 "data_size": 63488 
00:11:05.473 }, 00:11:05.473 { 00:11:05.473 "name": "pt4", 00:11:05.473 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:05.473 "is_configured": true, 00:11:05.473 "data_offset": 2048, 00:11:05.473 "data_size": 63488 00:11:05.473 } 00:11:05.473 ] 00:11:05.473 } 00:11:05.473 } 00:11:05.473 }' 00:11:05.473 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.473 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:05.473 pt2 00:11:05.473 pt3 00:11:05.473 pt4' 00:11:05.473 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.473 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.473 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.731 [2024-12-06 09:48:30.951587] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 958ecc0e-f517-4f8f-b993-344aea3aba31 '!=' 958ecc0e-f517-4f8f-b993-344aea3aba31 ']' 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72565 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72565 ']' 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72565 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:11:05.731 09:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.992 09:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72565 00:11:05.992 09:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.992 09:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.992 killing process with pid 72565 00:11:05.992 09:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72565' 00:11:05.992 09:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72565 00:11:05.992 [2024-12-06 09:48:31.035288] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:05.992 [2024-12-06 09:48:31.035375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.992 [2024-12-06 09:48:31.035451] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.992 [2024-12-06 09:48:31.035460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:05.992 09:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72565 00:11:06.251 [2024-12-06 09:48:31.429847] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.631 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:07.631 00:11:07.631 real 0m5.513s 00:11:07.631 user 0m7.963s 00:11:07.631 sys 0m0.923s 00:11:07.631 09:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.631 09:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.631 ************************************ 00:11:07.631 END TEST raid_superblock_test 
00:11:07.631 ************************************ 00:11:07.631 09:48:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:07.631 09:48:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:07.631 09:48:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.631 09:48:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.631 ************************************ 00:11:07.631 START TEST raid_read_error_test 00:11:07.631 ************************************ 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ClMHPJhJFt 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72824 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72824 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72824 ']' 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.631 09:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.631 [2024-12-06 09:48:32.710689] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:11:07.631 [2024-12-06 09:48:32.710887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72824 ] 00:11:07.631 [2024-12-06 09:48:32.881667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.891 [2024-12-06 09:48:32.995083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.151 [2024-12-06 09:48:33.191337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.151 [2024-12-06 09:48:33.191494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.412 BaseBdev1_malloc 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.412 true 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.412 [2024-12-06 09:48:33.595181] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:08.412 [2024-12-06 09:48:33.595237] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.412 [2024-12-06 09:48:33.595256] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:08.412 [2024-12-06 09:48:33.595266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.412 [2024-12-06 09:48:33.597319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.412 [2024-12-06 09:48:33.597412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:08.412 BaseBdev1 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.412 BaseBdev2_malloc 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.412 true 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.412 [2024-12-06 09:48:33.661521] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:08.412 [2024-12-06 09:48:33.661584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.412 [2024-12-06 09:48:33.661621] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:08.412 [2024-12-06 09:48:33.661632] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.412 [2024-12-06 09:48:33.663734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.412 [2024-12-06 09:48:33.663770] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:08.412 BaseBdev2 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.412 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:08.413 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.413 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.673 BaseBdev3_malloc 00:11:08.673 09:48:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.673 true 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.673 [2024-12-06 09:48:33.736938] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:08.673 [2024-12-06 09:48:33.737073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.673 [2024-12-06 09:48:33.737114] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:08.673 [2024-12-06 09:48:33.737125] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.673 [2024-12-06 09:48:33.739243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.673 [2024-12-06 09:48:33.739287] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:08.673 BaseBdev3 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.673 BaseBdev4_malloc 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.673 true 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.673 [2024-12-06 09:48:33.800780] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:08.673 [2024-12-06 09:48:33.800835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.673 [2024-12-06 09:48:33.800853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:08.673 [2024-12-06 09:48:33.800862] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.673 [2024-12-06 09:48:33.802938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.673 [2024-12-06 09:48:33.803017] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:08.673 BaseBdev4 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.673 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.673 [2024-12-06 09:48:33.812820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.673 [2024-12-06 09:48:33.814705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.673 [2024-12-06 09:48:33.814825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.673 [2024-12-06 09:48:33.814919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:08.673 [2024-12-06 09:48:33.815188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:08.673 [2024-12-06 09:48:33.815241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:08.674 [2024-12-06 09:48:33.815504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:08.674 [2024-12-06 09:48:33.815703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:08.674 [2024-12-06 09:48:33.815747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:08.674 [2024-12-06 09:48:33.815943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:08.674 09:48:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.674 "name": "raid_bdev1", 00:11:08.674 "uuid": "4c99038c-883c-452a-b6ff-b44a0f185f42", 00:11:08.674 "strip_size_kb": 64, 00:11:08.674 "state": "online", 00:11:08.674 "raid_level": "concat", 00:11:08.674 "superblock": true, 00:11:08.674 "num_base_bdevs": 4, 00:11:08.674 "num_base_bdevs_discovered": 4, 00:11:08.674 "num_base_bdevs_operational": 4, 00:11:08.674 "base_bdevs_list": [ 
00:11:08.674 { 00:11:08.674 "name": "BaseBdev1", 00:11:08.674 "uuid": "19beaa84-9c7b-58da-bdd6-7dfb7464a24c", 00:11:08.674 "is_configured": true, 00:11:08.674 "data_offset": 2048, 00:11:08.674 "data_size": 63488 00:11:08.674 }, 00:11:08.674 { 00:11:08.674 "name": "BaseBdev2", 00:11:08.674 "uuid": "039a2f0d-a869-599d-a2b9-a4f5aadccc60", 00:11:08.674 "is_configured": true, 00:11:08.674 "data_offset": 2048, 00:11:08.674 "data_size": 63488 00:11:08.674 }, 00:11:08.674 { 00:11:08.674 "name": "BaseBdev3", 00:11:08.674 "uuid": "5f269c3a-b5ff-5516-88d9-c6f018f1d2dc", 00:11:08.674 "is_configured": true, 00:11:08.674 "data_offset": 2048, 00:11:08.674 "data_size": 63488 00:11:08.674 }, 00:11:08.674 { 00:11:08.674 "name": "BaseBdev4", 00:11:08.674 "uuid": "8cbc2cb6-1e45-5087-a742-f8105faa816f", 00:11:08.674 "is_configured": true, 00:11:08.674 "data_offset": 2048, 00:11:08.674 "data_size": 63488 00:11:08.674 } 00:11:08.674 ] 00:11:08.674 }' 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.674 09:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.934 09:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:08.934 09:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:09.193 [2024-12-06 09:48:34.281213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.132 09:48:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.132 09:48:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.132 "name": "raid_bdev1", 00:11:10.132 "uuid": "4c99038c-883c-452a-b6ff-b44a0f185f42", 00:11:10.132 "strip_size_kb": 64, 00:11:10.132 "state": "online", 00:11:10.132 "raid_level": "concat", 00:11:10.132 "superblock": true, 00:11:10.132 "num_base_bdevs": 4, 00:11:10.132 "num_base_bdevs_discovered": 4, 00:11:10.132 "num_base_bdevs_operational": 4, 00:11:10.132 "base_bdevs_list": [ 00:11:10.132 { 00:11:10.132 "name": "BaseBdev1", 00:11:10.132 "uuid": "19beaa84-9c7b-58da-bdd6-7dfb7464a24c", 00:11:10.132 "is_configured": true, 00:11:10.132 "data_offset": 2048, 00:11:10.132 "data_size": 63488 00:11:10.132 }, 00:11:10.132 { 00:11:10.132 "name": "BaseBdev2", 00:11:10.132 "uuid": "039a2f0d-a869-599d-a2b9-a4f5aadccc60", 00:11:10.132 "is_configured": true, 00:11:10.132 "data_offset": 2048, 00:11:10.132 "data_size": 63488 00:11:10.132 }, 00:11:10.132 { 00:11:10.132 "name": "BaseBdev3", 00:11:10.132 "uuid": "5f269c3a-b5ff-5516-88d9-c6f018f1d2dc", 00:11:10.132 "is_configured": true, 00:11:10.132 "data_offset": 2048, 00:11:10.132 "data_size": 63488 00:11:10.132 }, 00:11:10.132 { 00:11:10.132 "name": "BaseBdev4", 00:11:10.132 "uuid": "8cbc2cb6-1e45-5087-a742-f8105faa816f", 00:11:10.132 "is_configured": true, 00:11:10.132 "data_offset": 2048, 00:11:10.132 "data_size": 63488 00:11:10.132 } 00:11:10.132 ] 00:11:10.132 }' 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.132 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.701 [2024-12-06 09:48:35.685476] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:10.701 [2024-12-06 09:48:35.685568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.701 [2024-12-06 09:48:35.688459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.701 [2024-12-06 09:48:35.688560] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.701 [2024-12-06 09:48:35.688631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.701 [2024-12-06 09:48:35.688675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:10.701 { 00:11:10.701 "results": [ 00:11:10.701 { 00:11:10.701 "job": "raid_bdev1", 00:11:10.701 "core_mask": "0x1", 00:11:10.701 "workload": "randrw", 00:11:10.701 "percentage": 50, 00:11:10.701 "status": "finished", 00:11:10.701 "queue_depth": 1, 00:11:10.701 "io_size": 131072, 00:11:10.701 "runtime": 1.405344, 00:11:10.701 "iops": 15538.544299474008, 00:11:10.701 "mibps": 1942.318037434251, 00:11:10.701 "io_failed": 1, 00:11:10.701 "io_timeout": 0, 00:11:10.701 "avg_latency_us": 89.31514642758447, 00:11:10.701 "min_latency_us": 26.494323144104804, 00:11:10.701 "max_latency_us": 1373.6803493449781 00:11:10.701 } 00:11:10.701 ], 00:11:10.701 "core_count": 1 00:11:10.701 } 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72824 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72824 ']' 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72824 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72824 00:11:10.701 killing process with pid 72824 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72824' 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72824 00:11:10.701 09:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72824 00:11:10.701 [2024-12-06 09:48:35.729583] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.960 [2024-12-06 09:48:36.047798] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.381 09:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:12.381 09:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ClMHPJhJFt 00:11:12.381 09:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:12.381 09:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:12.381 09:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:12.381 09:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.381 09:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:12.381 09:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:12.381 ************************************ 00:11:12.381 END TEST raid_read_error_test 00:11:12.381 ************************************ 00:11:12.381 00:11:12.381 real 0m4.615s 
00:11:12.381 user 0m5.402s 00:11:12.381 sys 0m0.576s 00:11:12.381 09:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.381 09:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.381 09:48:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:12.381 09:48:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.381 09:48:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.381 09:48:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.381 ************************************ 00:11:12.381 START TEST raid_write_error_test 00:11:12.381 ************************************ 00:11:12.381 09:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:12.381 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:12.381 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:12.381 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:12.381 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:12.381 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.381 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Zk33utlwWJ 00:11:12.382 09:48:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72970 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72970 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72970 ']' 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.382 09:48:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.382 [2024-12-06 09:48:37.396204] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:11:12.382 [2024-12-06 09:48:37.396420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72970 ] 00:11:12.382 [2024-12-06 09:48:37.554523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.647 [2024-12-06 09:48:37.667048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.648 [2024-12-06 09:48:37.867663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.648 [2024-12-06 09:48:37.867766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.218 BaseBdev1_malloc 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.218 true 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.218 [2024-12-06 09:48:38.284627] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:13.218 [2024-12-06 09:48:38.284762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.218 [2024-12-06 09:48:38.284788] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:13.218 [2024-12-06 09:48:38.284800] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.218 [2024-12-06 09:48:38.287019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.218 [2024-12-06 09:48:38.287066] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:13.218 BaseBdev1 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.218 BaseBdev2_malloc 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:13.218 09:48:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.218 true 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.218 [2024-12-06 09:48:38.350602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:13.218 [2024-12-06 09:48:38.350658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.218 [2024-12-06 09:48:38.350674] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:13.218 [2024-12-06 09:48:38.350684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.218 [2024-12-06 09:48:38.352736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.218 [2024-12-06 09:48:38.352775] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:13.218 BaseBdev2 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:13.218 BaseBdev3_malloc 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.218 true 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.218 [2024-12-06 09:48:38.428522] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:13.218 [2024-12-06 09:48:38.428619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.218 [2024-12-06 09:48:38.428658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:13.218 [2024-12-06 09:48:38.428669] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.218 [2024-12-06 09:48:38.430739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.218 [2024-12-06 09:48:38.430779] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:13.218 BaseBdev3 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:13.218 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.219 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.219 BaseBdev4_malloc 00:11:13.219 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.219 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:13.219 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.219 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.479 true 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.479 [2024-12-06 09:48:38.497192] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:13.479 [2024-12-06 09:48:38.497329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:13.479 [2024-12-06 09:48:38.497375] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:13.479 [2024-12-06 09:48:38.497387] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:13.479 [2024-12-06 09:48:38.499638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:13.479 [2024-12-06 09:48:38.499678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:13.479 BaseBdev4 
00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.479 [2024-12-06 09:48:38.509230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.479 [2024-12-06 09:48:38.510976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.479 [2024-12-06 09:48:38.511052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.479 [2024-12-06 09:48:38.511112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:13.479 [2024-12-06 09:48:38.511361] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:13.479 [2024-12-06 09:48:38.511378] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:13.479 [2024-12-06 09:48:38.511639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:13.479 [2024-12-06 09:48:38.511822] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:13.479 [2024-12-06 09:48:38.511834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:13.479 [2024-12-06 09:48:38.512008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.479 "name": "raid_bdev1", 00:11:13.479 "uuid": "4a9f4404-4af5-40d8-9f31-115e6db5373b", 00:11:13.479 "strip_size_kb": 64, 00:11:13.479 "state": "online", 00:11:13.479 "raid_level": "concat", 00:11:13.479 "superblock": true, 00:11:13.479 "num_base_bdevs": 4, 00:11:13.479 "num_base_bdevs_discovered": 4, 00:11:13.479 
"num_base_bdevs_operational": 4, 00:11:13.479 "base_bdevs_list": [ 00:11:13.479 { 00:11:13.479 "name": "BaseBdev1", 00:11:13.479 "uuid": "f1a49fe4-339e-58bd-9355-95f14cc4f1ff", 00:11:13.479 "is_configured": true, 00:11:13.479 "data_offset": 2048, 00:11:13.479 "data_size": 63488 00:11:13.479 }, 00:11:13.479 { 00:11:13.479 "name": "BaseBdev2", 00:11:13.479 "uuid": "3ae6b263-1489-54ad-813c-ff18304c196a", 00:11:13.479 "is_configured": true, 00:11:13.479 "data_offset": 2048, 00:11:13.479 "data_size": 63488 00:11:13.479 }, 00:11:13.479 { 00:11:13.479 "name": "BaseBdev3", 00:11:13.479 "uuid": "d6d52fe3-5e20-5b48-bd82-78ab8314d24b", 00:11:13.479 "is_configured": true, 00:11:13.479 "data_offset": 2048, 00:11:13.479 "data_size": 63488 00:11:13.479 }, 00:11:13.479 { 00:11:13.479 "name": "BaseBdev4", 00:11:13.479 "uuid": "a8817d21-6f3c-51ea-9a1c-c066188c85c3", 00:11:13.479 "is_configured": true, 00:11:13.479 "data_offset": 2048, 00:11:13.479 "data_size": 63488 00:11:13.479 } 00:11:13.479 ] 00:11:13.479 }' 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.479 09:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.739 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:13.739 09:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:13.999 [2024-12-06 09:48:39.065543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.937 09:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.938 09:48:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.938 09:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.938 "name": "raid_bdev1", 00:11:14.938 "uuid": "4a9f4404-4af5-40d8-9f31-115e6db5373b", 00:11:14.938 "strip_size_kb": 64, 00:11:14.938 "state": "online", 00:11:14.938 "raid_level": "concat", 00:11:14.938 "superblock": true, 00:11:14.938 "num_base_bdevs": 4, 00:11:14.938 "num_base_bdevs_discovered": 4, 00:11:14.938 "num_base_bdevs_operational": 4, 00:11:14.938 "base_bdevs_list": [ 00:11:14.938 { 00:11:14.938 "name": "BaseBdev1", 00:11:14.938 "uuid": "f1a49fe4-339e-58bd-9355-95f14cc4f1ff", 00:11:14.938 "is_configured": true, 00:11:14.938 "data_offset": 2048, 00:11:14.938 "data_size": 63488 00:11:14.938 }, 00:11:14.938 { 00:11:14.938 "name": "BaseBdev2", 00:11:14.938 "uuid": "3ae6b263-1489-54ad-813c-ff18304c196a", 00:11:14.938 "is_configured": true, 00:11:14.938 "data_offset": 2048, 00:11:14.938 "data_size": 63488 00:11:14.938 }, 00:11:14.938 { 00:11:14.938 "name": "BaseBdev3", 00:11:14.938 "uuid": "d6d52fe3-5e20-5b48-bd82-78ab8314d24b", 00:11:14.938 "is_configured": true, 00:11:14.938 "data_offset": 2048, 00:11:14.938 "data_size": 63488 00:11:14.938 }, 00:11:14.938 { 00:11:14.938 "name": "BaseBdev4", 00:11:14.938 "uuid": "a8817d21-6f3c-51ea-9a1c-c066188c85c3", 00:11:14.938 "is_configured": true, 00:11:14.938 "data_offset": 2048, 00:11:14.938 "data_size": 63488 00:11:14.938 } 00:11:14.938 ] 00:11:14.938 }' 00:11:14.938 09:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.938 09:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.197 09:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:15.197 09:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.197 09:48:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:15.197 [2024-12-06 09:48:40.429400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.197 [2024-12-06 09:48:40.429438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.197 [2024-12-06 09:48:40.432162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.197 [2024-12-06 09:48:40.432231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.197 [2024-12-06 09:48:40.432273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.197 [2024-12-06 09:48:40.432286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:15.197 { 00:11:15.197 "results": [ 00:11:15.197 { 00:11:15.197 "job": "raid_bdev1", 00:11:15.197 "core_mask": "0x1", 00:11:15.197 "workload": "randrw", 00:11:15.197 "percentage": 50, 00:11:15.197 "status": "finished", 00:11:15.197 "queue_depth": 1, 00:11:15.197 "io_size": 131072, 00:11:15.197 "runtime": 1.36474, 00:11:15.197 "iops": 15408.063074285212, 00:11:15.197 "mibps": 1926.0078842856515, 00:11:15.197 "io_failed": 1, 00:11:15.197 "io_timeout": 0, 00:11:15.197 "avg_latency_us": 90.02987124663156, 00:11:15.197 "min_latency_us": 25.2646288209607, 00:11:15.197 "max_latency_us": 1445.2262008733624 00:11:15.197 } 00:11:15.197 ], 00:11:15.197 "core_count": 1 00:11:15.197 } 00:11:15.197 09:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.197 09:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72970 00:11:15.197 09:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72970 ']' 00:11:15.197 09:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72970 00:11:15.197 09:48:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:15.197 09:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.197 09:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72970 00:11:15.456 killing process with pid 72970 00:11:15.456 09:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.456 09:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.456 09:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72970' 00:11:15.456 09:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72970 00:11:15.456 [2024-12-06 09:48:40.474825] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:15.456 09:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72970 00:11:15.715 [2024-12-06 09:48:40.804029] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.095 09:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Zk33utlwWJ 00:11:17.095 09:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:17.095 09:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:17.095 09:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:17.095 09:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:17.095 09:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.095 09:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:17.095 09:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:17.095 00:11:17.095 real 0m4.712s 00:11:17.095 user 0m5.577s 
00:11:17.095 sys 0m0.569s 00:11:17.095 ************************************ 00:11:17.095 END TEST raid_write_error_test 00:11:17.095 ************************************ 00:11:17.095 09:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.095 09:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.095 09:48:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:17.095 09:48:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:17.095 09:48:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:17.095 09:48:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.095 09:48:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:17.095 ************************************ 00:11:17.095 START TEST raid_state_function_test 00:11:17.095 ************************************ 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.095 
09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:17.095 09:48:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73108 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73108' 00:11:17.095 Process raid pid: 73108 00:11:17.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73108 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73108 ']' 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.095 09:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.095 [2024-12-06 09:48:42.161536] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:11:17.095 [2024-12-06 09:48:42.161653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.095 [2024-12-06 09:48:42.336682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.356 [2024-12-06 09:48:42.452989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.614 [2024-12-06 09:48:42.660825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.614 [2024-12-06 09:48:42.660868] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.874 09:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.874 09:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:17.874 09:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:17.874 09:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.874 09:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.874 [2024-12-06 09:48:42.999937] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.874 [2024-12-06 09:48:42.999995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.874 [2024-12-06 09:48:43.000012] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:17.874 [2024-12-06 09:48:43.000024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:17.874 [2024-12-06 09:48:43.000031] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:17.874 [2024-12-06 09:48:43.000041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:17.874 [2024-12-06 09:48:43.000048] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:17.874 [2024-12-06 09:48:43.000057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.874 "name": "Existed_Raid", 00:11:17.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.874 "strip_size_kb": 0, 00:11:17.874 "state": "configuring", 00:11:17.874 "raid_level": "raid1", 00:11:17.874 "superblock": false, 00:11:17.874 "num_base_bdevs": 4, 00:11:17.874 "num_base_bdevs_discovered": 0, 00:11:17.874 "num_base_bdevs_operational": 4, 00:11:17.874 "base_bdevs_list": [ 00:11:17.874 { 00:11:17.874 "name": "BaseBdev1", 00:11:17.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.874 "is_configured": false, 00:11:17.874 "data_offset": 0, 00:11:17.874 "data_size": 0 00:11:17.874 }, 00:11:17.874 { 00:11:17.874 "name": "BaseBdev2", 00:11:17.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.874 "is_configured": false, 00:11:17.874 "data_offset": 0, 00:11:17.874 "data_size": 0 00:11:17.874 }, 00:11:17.874 { 00:11:17.874 "name": "BaseBdev3", 00:11:17.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.874 "is_configured": false, 00:11:17.874 "data_offset": 0, 00:11:17.874 "data_size": 0 00:11:17.874 }, 00:11:17.874 { 00:11:17.874 "name": "BaseBdev4", 00:11:17.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.874 "is_configured": false, 00:11:17.874 "data_offset": 0, 00:11:17.874 "data_size": 0 00:11:17.874 } 00:11:17.874 ] 00:11:17.874 }' 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.874 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.443 [2024-12-06 09:48:43.411169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.443 [2024-12-06 09:48:43.411211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.443 [2024-12-06 09:48:43.423133] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.443 [2024-12-06 09:48:43.423187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.443 [2024-12-06 09:48:43.423197] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.443 [2024-12-06 09:48:43.423206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.443 [2024-12-06 09:48:43.423212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:18.443 [2024-12-06 09:48:43.423221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.443 [2024-12-06 09:48:43.423227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:18.443 [2024-12-06 09:48:43.423236] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.443 [2024-12-06 09:48:43.469959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.443 BaseBdev1 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.443 [ 00:11:18.443 { 00:11:18.443 "name": "BaseBdev1", 00:11:18.443 "aliases": [ 00:11:18.443 "6bb4210a-a731-4ef5-b254-32100c15af5e" 00:11:18.443 ], 00:11:18.443 "product_name": "Malloc disk", 00:11:18.443 "block_size": 512, 00:11:18.443 "num_blocks": 65536, 00:11:18.443 "uuid": "6bb4210a-a731-4ef5-b254-32100c15af5e", 00:11:18.443 "assigned_rate_limits": { 00:11:18.443 "rw_ios_per_sec": 0, 00:11:18.443 "rw_mbytes_per_sec": 0, 00:11:18.443 "r_mbytes_per_sec": 0, 00:11:18.443 "w_mbytes_per_sec": 0 00:11:18.443 }, 00:11:18.443 "claimed": true, 00:11:18.443 "claim_type": "exclusive_write", 00:11:18.443 "zoned": false, 00:11:18.443 "supported_io_types": { 00:11:18.443 "read": true, 00:11:18.443 "write": true, 00:11:18.443 "unmap": true, 00:11:18.443 "flush": true, 00:11:18.443 "reset": true, 00:11:18.443 "nvme_admin": false, 00:11:18.443 "nvme_io": false, 00:11:18.443 "nvme_io_md": false, 00:11:18.443 "write_zeroes": true, 00:11:18.443 "zcopy": true, 00:11:18.443 "get_zone_info": false, 00:11:18.443 "zone_management": false, 00:11:18.443 "zone_append": false, 00:11:18.443 "compare": false, 00:11:18.443 "compare_and_write": false, 00:11:18.443 "abort": true, 00:11:18.443 "seek_hole": false, 00:11:18.443 "seek_data": false, 00:11:18.443 "copy": true, 00:11:18.443 "nvme_iov_md": false 00:11:18.443 }, 00:11:18.443 "memory_domains": [ 00:11:18.443 { 00:11:18.443 "dma_device_id": "system", 00:11:18.443 "dma_device_type": 1 00:11:18.443 }, 00:11:18.443 { 00:11:18.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.443 "dma_device_type": 2 00:11:18.443 } 00:11:18.443 ], 00:11:18.443 "driver_specific": {} 00:11:18.443 } 00:11:18.443 ] 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.443 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.443 "name": "Existed_Raid", 
00:11:18.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.443 "strip_size_kb": 0, 00:11:18.443 "state": "configuring", 00:11:18.443 "raid_level": "raid1", 00:11:18.443 "superblock": false, 00:11:18.443 "num_base_bdevs": 4, 00:11:18.443 "num_base_bdevs_discovered": 1, 00:11:18.443 "num_base_bdevs_operational": 4, 00:11:18.444 "base_bdevs_list": [ 00:11:18.444 { 00:11:18.444 "name": "BaseBdev1", 00:11:18.444 "uuid": "6bb4210a-a731-4ef5-b254-32100c15af5e", 00:11:18.444 "is_configured": true, 00:11:18.444 "data_offset": 0, 00:11:18.444 "data_size": 65536 00:11:18.444 }, 00:11:18.444 { 00:11:18.444 "name": "BaseBdev2", 00:11:18.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.444 "is_configured": false, 00:11:18.444 "data_offset": 0, 00:11:18.444 "data_size": 0 00:11:18.444 }, 00:11:18.444 { 00:11:18.444 "name": "BaseBdev3", 00:11:18.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.444 "is_configured": false, 00:11:18.444 "data_offset": 0, 00:11:18.444 "data_size": 0 00:11:18.444 }, 00:11:18.444 { 00:11:18.444 "name": "BaseBdev4", 00:11:18.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.444 "is_configured": false, 00:11:18.444 "data_offset": 0, 00:11:18.444 "data_size": 0 00:11:18.444 } 00:11:18.444 ] 00:11:18.444 }' 00:11:18.444 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.444 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.703 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:18.703 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.703 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.703 [2024-12-06 09:48:43.973177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.703 [2024-12-06 09:48:43.973234] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:18.962 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.962 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.962 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.962 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.962 [2024-12-06 09:48:43.985190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.962 [2024-12-06 09:48:43.986983] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:18.962 [2024-12-06 09:48:43.987029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:18.962 [2024-12-06 09:48:43.987039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:18.962 [2024-12-06 09:48:43.987048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:18.962 [2024-12-06 09:48:43.987055] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:18.962 [2024-12-06 09:48:43.987063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:18.962 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.962 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:18.962 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:18.963 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:18.963 
09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.963 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.963 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.963 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.963 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.963 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.963 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.963 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.963 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.963 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.963 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.963 09:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.963 09:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.963 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.963 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.963 "name": "Existed_Raid", 00:11:18.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.963 "strip_size_kb": 0, 00:11:18.963 "state": "configuring", 00:11:18.963 "raid_level": "raid1", 00:11:18.963 "superblock": false, 00:11:18.963 "num_base_bdevs": 4, 00:11:18.963 "num_base_bdevs_discovered": 1, 
00:11:18.963 "num_base_bdevs_operational": 4, 00:11:18.963 "base_bdevs_list": [ 00:11:18.963 { 00:11:18.963 "name": "BaseBdev1", 00:11:18.963 "uuid": "6bb4210a-a731-4ef5-b254-32100c15af5e", 00:11:18.963 "is_configured": true, 00:11:18.963 "data_offset": 0, 00:11:18.963 "data_size": 65536 00:11:18.963 }, 00:11:18.963 { 00:11:18.963 "name": "BaseBdev2", 00:11:18.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.963 "is_configured": false, 00:11:18.963 "data_offset": 0, 00:11:18.963 "data_size": 0 00:11:18.963 }, 00:11:18.963 { 00:11:18.963 "name": "BaseBdev3", 00:11:18.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.963 "is_configured": false, 00:11:18.963 "data_offset": 0, 00:11:18.963 "data_size": 0 00:11:18.963 }, 00:11:18.963 { 00:11:18.963 "name": "BaseBdev4", 00:11:18.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.963 "is_configured": false, 00:11:18.963 "data_offset": 0, 00:11:18.963 "data_size": 0 00:11:18.963 } 00:11:18.963 ] 00:11:18.963 }' 00:11:18.963 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.963 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.223 [2024-12-06 09:48:44.476354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.223 BaseBdev2 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.223 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.490 [ 00:11:19.490 { 00:11:19.490 "name": "BaseBdev2", 00:11:19.490 "aliases": [ 00:11:19.490 "41f29e58-d3dc-4c68-99c0-9c73128a3063" 00:11:19.490 ], 00:11:19.490 "product_name": "Malloc disk", 00:11:19.490 "block_size": 512, 00:11:19.490 "num_blocks": 65536, 00:11:19.490 "uuid": "41f29e58-d3dc-4c68-99c0-9c73128a3063", 00:11:19.490 "assigned_rate_limits": { 00:11:19.490 "rw_ios_per_sec": 0, 00:11:19.490 "rw_mbytes_per_sec": 0, 00:11:19.490 "r_mbytes_per_sec": 0, 00:11:19.490 "w_mbytes_per_sec": 0 00:11:19.490 }, 00:11:19.490 "claimed": true, 00:11:19.490 "claim_type": "exclusive_write", 00:11:19.490 "zoned": false, 00:11:19.490 "supported_io_types": { 00:11:19.490 "read": true, 
00:11:19.490 "write": true, 00:11:19.490 "unmap": true, 00:11:19.490 "flush": true, 00:11:19.490 "reset": true, 00:11:19.490 "nvme_admin": false, 00:11:19.490 "nvme_io": false, 00:11:19.490 "nvme_io_md": false, 00:11:19.490 "write_zeroes": true, 00:11:19.490 "zcopy": true, 00:11:19.490 "get_zone_info": false, 00:11:19.490 "zone_management": false, 00:11:19.490 "zone_append": false, 00:11:19.490 "compare": false, 00:11:19.490 "compare_and_write": false, 00:11:19.490 "abort": true, 00:11:19.490 "seek_hole": false, 00:11:19.490 "seek_data": false, 00:11:19.490 "copy": true, 00:11:19.490 "nvme_iov_md": false 00:11:19.490 }, 00:11:19.490 "memory_domains": [ 00:11:19.490 { 00:11:19.490 "dma_device_id": "system", 00:11:19.490 "dma_device_type": 1 00:11:19.490 }, 00:11:19.490 { 00:11:19.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.490 "dma_device_type": 2 00:11:19.490 } 00:11:19.490 ], 00:11:19.490 "driver_specific": {} 00:11:19.490 } 00:11:19.490 ] 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.490 "name": "Existed_Raid", 00:11:19.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.490 "strip_size_kb": 0, 00:11:19.490 "state": "configuring", 00:11:19.490 "raid_level": "raid1", 00:11:19.490 "superblock": false, 00:11:19.490 "num_base_bdevs": 4, 00:11:19.490 "num_base_bdevs_discovered": 2, 00:11:19.490 "num_base_bdevs_operational": 4, 00:11:19.490 "base_bdevs_list": [ 00:11:19.490 { 00:11:19.490 "name": "BaseBdev1", 00:11:19.490 "uuid": "6bb4210a-a731-4ef5-b254-32100c15af5e", 00:11:19.490 "is_configured": true, 00:11:19.490 "data_offset": 0, 00:11:19.490 "data_size": 65536 00:11:19.490 }, 00:11:19.490 { 00:11:19.490 "name": "BaseBdev2", 00:11:19.490 "uuid": "41f29e58-d3dc-4c68-99c0-9c73128a3063", 00:11:19.490 "is_configured": true, 
00:11:19.490 "data_offset": 0, 00:11:19.490 "data_size": 65536 00:11:19.490 }, 00:11:19.490 { 00:11:19.490 "name": "BaseBdev3", 00:11:19.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.490 "is_configured": false, 00:11:19.490 "data_offset": 0, 00:11:19.490 "data_size": 0 00:11:19.490 }, 00:11:19.490 { 00:11:19.490 "name": "BaseBdev4", 00:11:19.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.490 "is_configured": false, 00:11:19.490 "data_offset": 0, 00:11:19.490 "data_size": 0 00:11:19.490 } 00:11:19.490 ] 00:11:19.490 }' 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.490 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.762 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:19.762 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.762 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.762 [2024-12-06 09:48:44.998043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.762 BaseBdev3 00:11:19.762 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.762 09:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:19.762 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:19.762 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.762 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.762 09:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.762 09:48:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.762 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.762 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.762 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.762 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.762 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:19.762 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.762 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.762 [ 00:11:19.762 { 00:11:19.762 "name": "BaseBdev3", 00:11:19.762 "aliases": [ 00:11:19.762 "e8fc7e42-f003-4b8e-b7dc-8f8f46aaf967" 00:11:19.762 ], 00:11:19.762 "product_name": "Malloc disk", 00:11:19.762 "block_size": 512, 00:11:19.762 "num_blocks": 65536, 00:11:19.762 "uuid": "e8fc7e42-f003-4b8e-b7dc-8f8f46aaf967", 00:11:19.762 "assigned_rate_limits": { 00:11:19.762 "rw_ios_per_sec": 0, 00:11:19.762 "rw_mbytes_per_sec": 0, 00:11:19.762 "r_mbytes_per_sec": 0, 00:11:19.762 "w_mbytes_per_sec": 0 00:11:19.762 }, 00:11:19.762 "claimed": true, 00:11:19.762 "claim_type": "exclusive_write", 00:11:19.762 "zoned": false, 00:11:19.762 "supported_io_types": { 00:11:19.762 "read": true, 00:11:19.762 "write": true, 00:11:19.762 "unmap": true, 00:11:19.762 "flush": true, 00:11:19.762 "reset": true, 00:11:19.762 "nvme_admin": false, 00:11:19.762 "nvme_io": false, 00:11:19.762 "nvme_io_md": false, 00:11:19.762 "write_zeroes": true, 00:11:19.762 "zcopy": true, 00:11:19.762 "get_zone_info": false, 00:11:19.762 "zone_management": false, 00:11:19.762 "zone_append": false, 00:11:19.762 "compare": false, 00:11:19.762 "compare_and_write": false, 
00:11:19.762 "abort": true, 00:11:20.021 "seek_hole": false, 00:11:20.021 "seek_data": false, 00:11:20.021 "copy": true, 00:11:20.021 "nvme_iov_md": false 00:11:20.021 }, 00:11:20.021 "memory_domains": [ 00:11:20.021 { 00:11:20.021 "dma_device_id": "system", 00:11:20.021 "dma_device_type": 1 00:11:20.021 }, 00:11:20.021 { 00:11:20.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.021 "dma_device_type": 2 00:11:20.021 } 00:11:20.021 ], 00:11:20.021 "driver_specific": {} 00:11:20.021 } 00:11:20.021 ] 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.021 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.021 "name": "Existed_Raid", 00:11:20.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.021 "strip_size_kb": 0, 00:11:20.021 "state": "configuring", 00:11:20.021 "raid_level": "raid1", 00:11:20.021 "superblock": false, 00:11:20.021 "num_base_bdevs": 4, 00:11:20.021 "num_base_bdevs_discovered": 3, 00:11:20.021 "num_base_bdevs_operational": 4, 00:11:20.021 "base_bdevs_list": [ 00:11:20.021 { 00:11:20.021 "name": "BaseBdev1", 00:11:20.021 "uuid": "6bb4210a-a731-4ef5-b254-32100c15af5e", 00:11:20.021 "is_configured": true, 00:11:20.021 "data_offset": 0, 00:11:20.021 "data_size": 65536 00:11:20.021 }, 00:11:20.021 { 00:11:20.021 "name": "BaseBdev2", 00:11:20.021 "uuid": "41f29e58-d3dc-4c68-99c0-9c73128a3063", 00:11:20.021 "is_configured": true, 00:11:20.021 "data_offset": 0, 00:11:20.021 "data_size": 65536 00:11:20.021 }, 00:11:20.021 { 00:11:20.021 "name": "BaseBdev3", 00:11:20.021 "uuid": "e8fc7e42-f003-4b8e-b7dc-8f8f46aaf967", 00:11:20.021 "is_configured": true, 00:11:20.021 "data_offset": 0, 00:11:20.021 "data_size": 65536 00:11:20.021 }, 00:11:20.021 { 00:11:20.022 "name": "BaseBdev4", 00:11:20.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.022 "is_configured": false, 
00:11:20.022 "data_offset": 0, 00:11:20.022 "data_size": 0 00:11:20.022 } 00:11:20.022 ] 00:11:20.022 }' 00:11:20.022 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.022 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.280 [2024-12-06 09:48:45.511102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:20.280 [2024-12-06 09:48:45.511178] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:20.280 [2024-12-06 09:48:45.511188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:20.280 [2024-12-06 09:48:45.511468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:20.280 [2024-12-06 09:48:45.511663] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:20.280 [2024-12-06 09:48:45.511683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:20.280 [2024-12-06 09:48:45.511983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.280 BaseBdev4 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.280 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.281 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:20.281 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.281 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.281 [ 00:11:20.281 { 00:11:20.281 "name": "BaseBdev4", 00:11:20.281 "aliases": [ 00:11:20.281 "d4c19830-d04e-4cf2-a888-73e75bfbbaa6" 00:11:20.281 ], 00:11:20.281 "product_name": "Malloc disk", 00:11:20.281 "block_size": 512, 00:11:20.281 "num_blocks": 65536, 00:11:20.281 "uuid": "d4c19830-d04e-4cf2-a888-73e75bfbbaa6", 00:11:20.281 "assigned_rate_limits": { 00:11:20.281 "rw_ios_per_sec": 0, 00:11:20.281 "rw_mbytes_per_sec": 0, 00:11:20.281 "r_mbytes_per_sec": 0, 00:11:20.281 "w_mbytes_per_sec": 0 00:11:20.281 }, 00:11:20.281 "claimed": true, 00:11:20.281 "claim_type": "exclusive_write", 00:11:20.281 "zoned": false, 00:11:20.281 "supported_io_types": { 00:11:20.281 "read": true, 00:11:20.281 "write": true, 00:11:20.281 "unmap": true, 00:11:20.281 "flush": true, 00:11:20.281 "reset": true, 00:11:20.281 
"nvme_admin": false, 00:11:20.281 "nvme_io": false, 00:11:20.281 "nvme_io_md": false, 00:11:20.281 "write_zeroes": true, 00:11:20.281 "zcopy": true, 00:11:20.281 "get_zone_info": false, 00:11:20.281 "zone_management": false, 00:11:20.281 "zone_append": false, 00:11:20.281 "compare": false, 00:11:20.281 "compare_and_write": false, 00:11:20.281 "abort": true, 00:11:20.281 "seek_hole": false, 00:11:20.281 "seek_data": false, 00:11:20.281 "copy": true, 00:11:20.281 "nvme_iov_md": false 00:11:20.281 }, 00:11:20.281 "memory_domains": [ 00:11:20.281 { 00:11:20.281 "dma_device_id": "system", 00:11:20.281 "dma_device_type": 1 00:11:20.281 }, 00:11:20.281 { 00:11:20.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.281 "dma_device_type": 2 00:11:20.281 } 00:11:20.281 ], 00:11:20.281 "driver_specific": {} 00:11:20.281 } 00:11:20.281 ] 00:11:20.281 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.281 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:20.281 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:20.281 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:20.281 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:20.281 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.281 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.281 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.539 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.539 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.539 09:48:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.540 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.540 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.540 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.540 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.540 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.540 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.540 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.540 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.540 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.540 "name": "Existed_Raid", 00:11:20.540 "uuid": "86d4e4c6-5c59-420e-a710-850254c052eb", 00:11:20.540 "strip_size_kb": 0, 00:11:20.540 "state": "online", 00:11:20.540 "raid_level": "raid1", 00:11:20.540 "superblock": false, 00:11:20.540 "num_base_bdevs": 4, 00:11:20.540 "num_base_bdevs_discovered": 4, 00:11:20.540 "num_base_bdevs_operational": 4, 00:11:20.540 "base_bdevs_list": [ 00:11:20.540 { 00:11:20.540 "name": "BaseBdev1", 00:11:20.540 "uuid": "6bb4210a-a731-4ef5-b254-32100c15af5e", 00:11:20.540 "is_configured": true, 00:11:20.540 "data_offset": 0, 00:11:20.540 "data_size": 65536 00:11:20.540 }, 00:11:20.540 { 00:11:20.540 "name": "BaseBdev2", 00:11:20.540 "uuid": "41f29e58-d3dc-4c68-99c0-9c73128a3063", 00:11:20.540 "is_configured": true, 00:11:20.540 "data_offset": 0, 00:11:20.540 "data_size": 65536 00:11:20.540 }, 00:11:20.540 { 00:11:20.540 "name": "BaseBdev3", 00:11:20.540 "uuid": 
"e8fc7e42-f003-4b8e-b7dc-8f8f46aaf967", 00:11:20.540 "is_configured": true, 00:11:20.540 "data_offset": 0, 00:11:20.540 "data_size": 65536 00:11:20.540 }, 00:11:20.540 { 00:11:20.540 "name": "BaseBdev4", 00:11:20.540 "uuid": "d4c19830-d04e-4cf2-a888-73e75bfbbaa6", 00:11:20.540 "is_configured": true, 00:11:20.540 "data_offset": 0, 00:11:20.540 "data_size": 65536 00:11:20.540 } 00:11:20.540 ] 00:11:20.540 }' 00:11:20.540 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.540 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.799 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:20.799 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:20.799 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:20.799 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:20.799 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:20.799 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:20.799 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:20.799 09:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:20.799 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.799 09:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.799 [2024-12-06 09:48:46.002678] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.799 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.799 09:48:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:20.799 "name": "Existed_Raid", 00:11:20.799 "aliases": [ 00:11:20.799 "86d4e4c6-5c59-420e-a710-850254c052eb" 00:11:20.799 ], 00:11:20.799 "product_name": "Raid Volume", 00:11:20.799 "block_size": 512, 00:11:20.799 "num_blocks": 65536, 00:11:20.799 "uuid": "86d4e4c6-5c59-420e-a710-850254c052eb", 00:11:20.799 "assigned_rate_limits": { 00:11:20.799 "rw_ios_per_sec": 0, 00:11:20.799 "rw_mbytes_per_sec": 0, 00:11:20.799 "r_mbytes_per_sec": 0, 00:11:20.799 "w_mbytes_per_sec": 0 00:11:20.799 }, 00:11:20.799 "claimed": false, 00:11:20.799 "zoned": false, 00:11:20.799 "supported_io_types": { 00:11:20.799 "read": true, 00:11:20.799 "write": true, 00:11:20.799 "unmap": false, 00:11:20.799 "flush": false, 00:11:20.799 "reset": true, 00:11:20.799 "nvme_admin": false, 00:11:20.799 "nvme_io": false, 00:11:20.799 "nvme_io_md": false, 00:11:20.799 "write_zeroes": true, 00:11:20.799 "zcopy": false, 00:11:20.799 "get_zone_info": false, 00:11:20.799 "zone_management": false, 00:11:20.799 "zone_append": false, 00:11:20.799 "compare": false, 00:11:20.799 "compare_and_write": false, 00:11:20.799 "abort": false, 00:11:20.799 "seek_hole": false, 00:11:20.799 "seek_data": false, 00:11:20.799 "copy": false, 00:11:20.799 "nvme_iov_md": false 00:11:20.799 }, 00:11:20.799 "memory_domains": [ 00:11:20.799 { 00:11:20.799 "dma_device_id": "system", 00:11:20.799 "dma_device_type": 1 00:11:20.799 }, 00:11:20.799 { 00:11:20.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.799 "dma_device_type": 2 00:11:20.799 }, 00:11:20.799 { 00:11:20.799 "dma_device_id": "system", 00:11:20.799 "dma_device_type": 1 00:11:20.799 }, 00:11:20.799 { 00:11:20.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.799 "dma_device_type": 2 00:11:20.799 }, 00:11:20.799 { 00:11:20.799 "dma_device_id": "system", 00:11:20.799 "dma_device_type": 1 00:11:20.799 }, 00:11:20.799 { 00:11:20.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:20.799 "dma_device_type": 2 00:11:20.799 }, 00:11:20.799 { 00:11:20.799 "dma_device_id": "system", 00:11:20.799 "dma_device_type": 1 00:11:20.799 }, 00:11:20.799 { 00:11:20.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.799 "dma_device_type": 2 00:11:20.799 } 00:11:20.799 ], 00:11:20.799 "driver_specific": { 00:11:20.799 "raid": { 00:11:20.799 "uuid": "86d4e4c6-5c59-420e-a710-850254c052eb", 00:11:20.799 "strip_size_kb": 0, 00:11:20.799 "state": "online", 00:11:20.799 "raid_level": "raid1", 00:11:20.799 "superblock": false, 00:11:20.799 "num_base_bdevs": 4, 00:11:20.799 "num_base_bdevs_discovered": 4, 00:11:20.799 "num_base_bdevs_operational": 4, 00:11:20.799 "base_bdevs_list": [ 00:11:20.799 { 00:11:20.799 "name": "BaseBdev1", 00:11:20.799 "uuid": "6bb4210a-a731-4ef5-b254-32100c15af5e", 00:11:20.799 "is_configured": true, 00:11:20.799 "data_offset": 0, 00:11:20.799 "data_size": 65536 00:11:20.799 }, 00:11:20.799 { 00:11:20.799 "name": "BaseBdev2", 00:11:20.799 "uuid": "41f29e58-d3dc-4c68-99c0-9c73128a3063", 00:11:20.799 "is_configured": true, 00:11:20.799 "data_offset": 0, 00:11:20.799 "data_size": 65536 00:11:20.799 }, 00:11:20.799 { 00:11:20.799 "name": "BaseBdev3", 00:11:20.799 "uuid": "e8fc7e42-f003-4b8e-b7dc-8f8f46aaf967", 00:11:20.799 "is_configured": true, 00:11:20.799 "data_offset": 0, 00:11:20.799 "data_size": 65536 00:11:20.799 }, 00:11:20.799 { 00:11:20.799 "name": "BaseBdev4", 00:11:20.799 "uuid": "d4c19830-d04e-4cf2-a888-73e75bfbbaa6", 00:11:20.799 "is_configured": true, 00:11:20.799 "data_offset": 0, 00:11:20.799 "data_size": 65536 00:11:20.799 } 00:11:20.799 ] 00:11:20.799 } 00:11:20.799 } 00:11:20.799 }' 00:11:20.799 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:21.059 BaseBdev2 00:11:21.059 BaseBdev3 
00:11:21.059 BaseBdev4' 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.059 09:48:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.059 09:48:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.059 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.059 [2024-12-06 09:48:46.293871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.319 
09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.319 "name": "Existed_Raid", 00:11:21.319 "uuid": "86d4e4c6-5c59-420e-a710-850254c052eb", 00:11:21.319 "strip_size_kb": 0, 00:11:21.319 "state": "online", 00:11:21.319 "raid_level": "raid1", 00:11:21.319 "superblock": false, 00:11:21.319 "num_base_bdevs": 4, 00:11:21.319 "num_base_bdevs_discovered": 3, 00:11:21.319 "num_base_bdevs_operational": 3, 00:11:21.319 "base_bdevs_list": [ 00:11:21.319 { 00:11:21.319 "name": null, 00:11:21.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.319 "is_configured": false, 00:11:21.319 "data_offset": 0, 00:11:21.319 "data_size": 65536 00:11:21.319 }, 00:11:21.319 { 00:11:21.319 "name": "BaseBdev2", 00:11:21.319 "uuid": "41f29e58-d3dc-4c68-99c0-9c73128a3063", 00:11:21.319 "is_configured": true, 00:11:21.319 "data_offset": 0, 00:11:21.319 "data_size": 65536 00:11:21.319 }, 00:11:21.319 { 00:11:21.319 "name": "BaseBdev3", 00:11:21.319 "uuid": "e8fc7e42-f003-4b8e-b7dc-8f8f46aaf967", 00:11:21.319 "is_configured": true, 00:11:21.319 "data_offset": 0, 
00:11:21.319 "data_size": 65536 00:11:21.319 }, 00:11:21.319 { 00:11:21.319 "name": "BaseBdev4", 00:11:21.319 "uuid": "d4c19830-d04e-4cf2-a888-73e75bfbbaa6", 00:11:21.319 "is_configured": true, 00:11:21.319 "data_offset": 0, 00:11:21.319 "data_size": 65536 00:11:21.319 } 00:11:21.319 ] 00:11:21.319 }' 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.319 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.577 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:21.577 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:21.577 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.577 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.577 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.577 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:21.834 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.834 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:21.834 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:21.834 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:21.834 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.834 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.834 [2024-12-06 09:48:46.885495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:21.834 09:48:46 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.834 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:21.834 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:21.834 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.834 09:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:21.834 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.834 09:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.834 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.834 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:21.834 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:21.834 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:21.834 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.834 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.834 [2024-12-06 09:48:47.036743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:22.091 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.091 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:22.091 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.091 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.091 09:48:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.091 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:22.091 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.091 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.091 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:22.091 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 [2024-12-06 09:48:47.195694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:22.092 [2024-12-06 09:48:47.195799] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.092 [2024-12-06 09:48:47.290082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.092 [2024-12-06 09:48:47.290153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.092 [2024-12-06 09:48:47.290187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.092 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.350 BaseBdev2 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.350 [ 00:11:22.350 { 00:11:22.350 "name": "BaseBdev2", 00:11:22.350 "aliases": [ 00:11:22.350 "763ff163-09e8-44ce-a2d6-8b55075f5cda" 00:11:22.350 ], 00:11:22.350 "product_name": "Malloc disk", 00:11:22.350 "block_size": 512, 00:11:22.350 "num_blocks": 65536, 00:11:22.350 "uuid": "763ff163-09e8-44ce-a2d6-8b55075f5cda", 00:11:22.350 "assigned_rate_limits": { 00:11:22.350 "rw_ios_per_sec": 0, 00:11:22.350 "rw_mbytes_per_sec": 0, 00:11:22.350 "r_mbytes_per_sec": 0, 00:11:22.350 "w_mbytes_per_sec": 0 00:11:22.350 }, 00:11:22.350 "claimed": false, 00:11:22.350 "zoned": false, 00:11:22.350 "supported_io_types": { 00:11:22.350 "read": true, 00:11:22.350 "write": true, 00:11:22.350 "unmap": true, 00:11:22.350 "flush": true, 00:11:22.350 "reset": true, 00:11:22.350 "nvme_admin": false, 00:11:22.350 "nvme_io": false, 00:11:22.350 "nvme_io_md": false, 00:11:22.350 "write_zeroes": true, 00:11:22.350 "zcopy": true, 00:11:22.350 "get_zone_info": false, 00:11:22.350 "zone_management": false, 00:11:22.350 "zone_append": false, 
00:11:22.350 "compare": false, 00:11:22.350 "compare_and_write": false, 00:11:22.350 "abort": true, 00:11:22.350 "seek_hole": false, 00:11:22.350 "seek_data": false, 00:11:22.350 "copy": true, 00:11:22.350 "nvme_iov_md": false 00:11:22.350 }, 00:11:22.350 "memory_domains": [ 00:11:22.350 { 00:11:22.350 "dma_device_id": "system", 00:11:22.350 "dma_device_type": 1 00:11:22.350 }, 00:11:22.350 { 00:11:22.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.350 "dma_device_type": 2 00:11:22.350 } 00:11:22.350 ], 00:11:22.350 "driver_specific": {} 00:11:22.350 } 00:11:22.350 ] 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.350 BaseBdev3 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.350 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.350 [ 00:11:22.350 { 00:11:22.350 "name": "BaseBdev3", 00:11:22.351 "aliases": [ 00:11:22.351 "623df6b1-b2ae-468c-836d-c60fef7fb6ed" 00:11:22.351 ], 00:11:22.351 "product_name": "Malloc disk", 00:11:22.351 "block_size": 512, 00:11:22.351 "num_blocks": 65536, 00:11:22.351 "uuid": "623df6b1-b2ae-468c-836d-c60fef7fb6ed", 00:11:22.351 "assigned_rate_limits": { 00:11:22.351 "rw_ios_per_sec": 0, 00:11:22.351 "rw_mbytes_per_sec": 0, 00:11:22.351 "r_mbytes_per_sec": 0, 00:11:22.351 "w_mbytes_per_sec": 0 00:11:22.351 }, 00:11:22.351 "claimed": false, 00:11:22.351 "zoned": false, 00:11:22.351 "supported_io_types": { 00:11:22.351 "read": true, 00:11:22.351 "write": true, 00:11:22.351 "unmap": true, 00:11:22.351 "flush": true, 00:11:22.351 "reset": true, 00:11:22.351 "nvme_admin": false, 00:11:22.351 "nvme_io": false, 00:11:22.351 "nvme_io_md": false, 00:11:22.351 "write_zeroes": true, 00:11:22.351 "zcopy": true, 00:11:22.351 "get_zone_info": false, 00:11:22.351 "zone_management": false, 00:11:22.351 "zone_append": false, 
00:11:22.351 "compare": false, 00:11:22.351 "compare_and_write": false, 00:11:22.351 "abort": true, 00:11:22.351 "seek_hole": false, 00:11:22.351 "seek_data": false, 00:11:22.351 "copy": true, 00:11:22.351 "nvme_iov_md": false 00:11:22.351 }, 00:11:22.351 "memory_domains": [ 00:11:22.351 { 00:11:22.351 "dma_device_id": "system", 00:11:22.351 "dma_device_type": 1 00:11:22.351 }, 00:11:22.351 { 00:11:22.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.351 "dma_device_type": 2 00:11:22.351 } 00:11:22.351 ], 00:11:22.351 "driver_specific": {} 00:11:22.351 } 00:11:22.351 ] 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.351 BaseBdev4 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.351 [ 00:11:22.351 { 00:11:22.351 "name": "BaseBdev4", 00:11:22.351 "aliases": [ 00:11:22.351 "94b3eb5b-b5ba-46ae-9c61-194ddb9cae9b" 00:11:22.351 ], 00:11:22.351 "product_name": "Malloc disk", 00:11:22.351 "block_size": 512, 00:11:22.351 "num_blocks": 65536, 00:11:22.351 "uuid": "94b3eb5b-b5ba-46ae-9c61-194ddb9cae9b", 00:11:22.351 "assigned_rate_limits": { 00:11:22.351 "rw_ios_per_sec": 0, 00:11:22.351 "rw_mbytes_per_sec": 0, 00:11:22.351 "r_mbytes_per_sec": 0, 00:11:22.351 "w_mbytes_per_sec": 0 00:11:22.351 }, 00:11:22.351 "claimed": false, 00:11:22.351 "zoned": false, 00:11:22.351 "supported_io_types": { 00:11:22.351 "read": true, 00:11:22.351 "write": true, 00:11:22.351 "unmap": true, 00:11:22.351 "flush": true, 00:11:22.351 "reset": true, 00:11:22.351 "nvme_admin": false, 00:11:22.351 "nvme_io": false, 00:11:22.351 "nvme_io_md": false, 00:11:22.351 "write_zeroes": true, 00:11:22.351 "zcopy": true, 00:11:22.351 "get_zone_info": false, 00:11:22.351 "zone_management": false, 00:11:22.351 "zone_append": false, 
00:11:22.351 "compare": false, 00:11:22.351 "compare_and_write": false, 00:11:22.351 "abort": true, 00:11:22.351 "seek_hole": false, 00:11:22.351 "seek_data": false, 00:11:22.351 "copy": true, 00:11:22.351 "nvme_iov_md": false 00:11:22.351 }, 00:11:22.351 "memory_domains": [ 00:11:22.351 { 00:11:22.351 "dma_device_id": "system", 00:11:22.351 "dma_device_type": 1 00:11:22.351 }, 00:11:22.351 { 00:11:22.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.351 "dma_device_type": 2 00:11:22.351 } 00:11:22.351 ], 00:11:22.351 "driver_specific": {} 00:11:22.351 } 00:11:22.351 ] 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.351 [2024-12-06 09:48:47.592164] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.351 [2024-12-06 09:48:47.592214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.351 [2024-12-06 09:48:47.592233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.351 [2024-12-06 09:48:47.594050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.351 [2024-12-06 09:48:47.594100] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.351 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.613 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.613 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:22.613 "name": "Existed_Raid", 00:11:22.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.613 "strip_size_kb": 0, 00:11:22.613 "state": "configuring", 00:11:22.613 "raid_level": "raid1", 00:11:22.613 "superblock": false, 00:11:22.613 "num_base_bdevs": 4, 00:11:22.613 "num_base_bdevs_discovered": 3, 00:11:22.613 "num_base_bdevs_operational": 4, 00:11:22.613 "base_bdevs_list": [ 00:11:22.613 { 00:11:22.613 "name": "BaseBdev1", 00:11:22.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.613 "is_configured": false, 00:11:22.613 "data_offset": 0, 00:11:22.613 "data_size": 0 00:11:22.613 }, 00:11:22.613 { 00:11:22.613 "name": "BaseBdev2", 00:11:22.613 "uuid": "763ff163-09e8-44ce-a2d6-8b55075f5cda", 00:11:22.613 "is_configured": true, 00:11:22.613 "data_offset": 0, 00:11:22.613 "data_size": 65536 00:11:22.613 }, 00:11:22.613 { 00:11:22.613 "name": "BaseBdev3", 00:11:22.613 "uuid": "623df6b1-b2ae-468c-836d-c60fef7fb6ed", 00:11:22.613 "is_configured": true, 00:11:22.613 "data_offset": 0, 00:11:22.613 "data_size": 65536 00:11:22.613 }, 00:11:22.613 { 00:11:22.613 "name": "BaseBdev4", 00:11:22.613 "uuid": "94b3eb5b-b5ba-46ae-9c61-194ddb9cae9b", 00:11:22.613 "is_configured": true, 00:11:22.613 "data_offset": 0, 00:11:22.613 "data_size": 65536 00:11:22.613 } 00:11:22.613 ] 00:11:22.613 }' 00:11:22.613 09:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.613 09:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.871 [2024-12-06 09:48:48.019453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.871 "name": "Existed_Raid", 00:11:22.871 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:22.871 "strip_size_kb": 0, 00:11:22.871 "state": "configuring", 00:11:22.871 "raid_level": "raid1", 00:11:22.871 "superblock": false, 00:11:22.871 "num_base_bdevs": 4, 00:11:22.871 "num_base_bdevs_discovered": 2, 00:11:22.871 "num_base_bdevs_operational": 4, 00:11:22.871 "base_bdevs_list": [ 00:11:22.871 { 00:11:22.871 "name": "BaseBdev1", 00:11:22.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.871 "is_configured": false, 00:11:22.871 "data_offset": 0, 00:11:22.871 "data_size": 0 00:11:22.871 }, 00:11:22.871 { 00:11:22.871 "name": null, 00:11:22.871 "uuid": "763ff163-09e8-44ce-a2d6-8b55075f5cda", 00:11:22.871 "is_configured": false, 00:11:22.871 "data_offset": 0, 00:11:22.871 "data_size": 65536 00:11:22.871 }, 00:11:22.871 { 00:11:22.871 "name": "BaseBdev3", 00:11:22.871 "uuid": "623df6b1-b2ae-468c-836d-c60fef7fb6ed", 00:11:22.871 "is_configured": true, 00:11:22.871 "data_offset": 0, 00:11:22.871 "data_size": 65536 00:11:22.871 }, 00:11:22.871 { 00:11:22.871 "name": "BaseBdev4", 00:11:22.871 "uuid": "94b3eb5b-b5ba-46ae-9c61-194ddb9cae9b", 00:11:22.871 "is_configured": true, 00:11:22.871 "data_offset": 0, 00:11:22.871 "data_size": 65536 00:11:22.871 } 00:11:22.871 ] 00:11:22.871 }' 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.871 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.438 [2024-12-06 09:48:48.527838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.438 BaseBdev1 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.438 [ 00:11:23.438 { 00:11:23.438 "name": "BaseBdev1", 00:11:23.438 "aliases": [ 00:11:23.438 "4ab5a8fe-3c7b-41ed-8391-ac99d83a6081" 00:11:23.438 ], 00:11:23.438 "product_name": "Malloc disk", 00:11:23.438 "block_size": 512, 00:11:23.438 "num_blocks": 65536, 00:11:23.438 "uuid": "4ab5a8fe-3c7b-41ed-8391-ac99d83a6081", 00:11:23.438 "assigned_rate_limits": { 00:11:23.438 "rw_ios_per_sec": 0, 00:11:23.438 "rw_mbytes_per_sec": 0, 00:11:23.438 "r_mbytes_per_sec": 0, 00:11:23.438 "w_mbytes_per_sec": 0 00:11:23.438 }, 00:11:23.438 "claimed": true, 00:11:23.438 "claim_type": "exclusive_write", 00:11:23.438 "zoned": false, 00:11:23.438 "supported_io_types": { 00:11:23.438 "read": true, 00:11:23.438 "write": true, 00:11:23.438 "unmap": true, 00:11:23.438 "flush": true, 00:11:23.438 "reset": true, 00:11:23.438 "nvme_admin": false, 00:11:23.438 "nvme_io": false, 00:11:23.438 "nvme_io_md": false, 00:11:23.438 "write_zeroes": true, 00:11:23.438 "zcopy": true, 00:11:23.438 "get_zone_info": false, 00:11:23.438 "zone_management": false, 00:11:23.438 "zone_append": false, 00:11:23.438 "compare": false, 00:11:23.438 "compare_and_write": false, 00:11:23.438 "abort": true, 00:11:23.438 "seek_hole": false, 00:11:23.438 "seek_data": false, 00:11:23.438 "copy": true, 00:11:23.438 "nvme_iov_md": false 00:11:23.438 }, 00:11:23.438 "memory_domains": [ 00:11:23.438 { 00:11:23.438 "dma_device_id": "system", 00:11:23.438 "dma_device_type": 1 00:11:23.438 }, 00:11:23.438 { 00:11:23.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.438 "dma_device_type": 2 00:11:23.438 } 00:11:23.438 ], 00:11:23.438 "driver_specific": {} 00:11:23.438 } 00:11:23.438 ] 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:23.438 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.439 "name": "Existed_Raid", 00:11:23.439 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:23.439 "strip_size_kb": 0, 00:11:23.439 "state": "configuring", 00:11:23.439 "raid_level": "raid1", 00:11:23.439 "superblock": false, 00:11:23.439 "num_base_bdevs": 4, 00:11:23.439 "num_base_bdevs_discovered": 3, 00:11:23.439 "num_base_bdevs_operational": 4, 00:11:23.439 "base_bdevs_list": [ 00:11:23.439 { 00:11:23.439 "name": "BaseBdev1", 00:11:23.439 "uuid": "4ab5a8fe-3c7b-41ed-8391-ac99d83a6081", 00:11:23.439 "is_configured": true, 00:11:23.439 "data_offset": 0, 00:11:23.439 "data_size": 65536 00:11:23.439 }, 00:11:23.439 { 00:11:23.439 "name": null, 00:11:23.439 "uuid": "763ff163-09e8-44ce-a2d6-8b55075f5cda", 00:11:23.439 "is_configured": false, 00:11:23.439 "data_offset": 0, 00:11:23.439 "data_size": 65536 00:11:23.439 }, 00:11:23.439 { 00:11:23.439 "name": "BaseBdev3", 00:11:23.439 "uuid": "623df6b1-b2ae-468c-836d-c60fef7fb6ed", 00:11:23.439 "is_configured": true, 00:11:23.439 "data_offset": 0, 00:11:23.439 "data_size": 65536 00:11:23.439 }, 00:11:23.439 { 00:11:23.439 "name": "BaseBdev4", 00:11:23.439 "uuid": "94b3eb5b-b5ba-46ae-9c61-194ddb9cae9b", 00:11:23.439 "is_configured": true, 00:11:23.439 "data_offset": 0, 00:11:23.439 "data_size": 65536 00:11:23.439 } 00:11:23.439 ] 00:11:23.439 }' 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.439 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.007 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.007 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.007 09:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:24.007 09:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.007 [2024-12-06 09:48:49.039040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.007 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.008 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.008 "name": "Existed_Raid", 00:11:24.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.008 "strip_size_kb": 0, 00:11:24.008 "state": "configuring", 00:11:24.008 "raid_level": "raid1", 00:11:24.008 "superblock": false, 00:11:24.008 "num_base_bdevs": 4, 00:11:24.008 "num_base_bdevs_discovered": 2, 00:11:24.008 "num_base_bdevs_operational": 4, 00:11:24.008 "base_bdevs_list": [ 00:11:24.008 { 00:11:24.008 "name": "BaseBdev1", 00:11:24.008 "uuid": "4ab5a8fe-3c7b-41ed-8391-ac99d83a6081", 00:11:24.008 "is_configured": true, 00:11:24.008 "data_offset": 0, 00:11:24.008 "data_size": 65536 00:11:24.008 }, 00:11:24.008 { 00:11:24.008 "name": null, 00:11:24.008 "uuid": "763ff163-09e8-44ce-a2d6-8b55075f5cda", 00:11:24.008 "is_configured": false, 00:11:24.008 "data_offset": 0, 00:11:24.008 "data_size": 65536 00:11:24.008 }, 00:11:24.008 { 00:11:24.008 "name": null, 00:11:24.008 "uuid": "623df6b1-b2ae-468c-836d-c60fef7fb6ed", 00:11:24.008 "is_configured": false, 00:11:24.008 "data_offset": 0, 00:11:24.008 "data_size": 65536 00:11:24.008 }, 00:11:24.008 { 00:11:24.008 "name": "BaseBdev4", 00:11:24.008 "uuid": "94b3eb5b-b5ba-46ae-9c61-194ddb9cae9b", 00:11:24.008 "is_configured": true, 00:11:24.008 "data_offset": 0, 00:11:24.008 "data_size": 65536 00:11:24.008 } 00:11:24.008 ] 00:11:24.008 }' 00:11:24.008 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.008 09:48:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.267 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:24.267 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.267 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.267 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.267 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.267 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:24.267 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:24.267 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.267 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.267 [2024-12-06 09:48:49.538194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.526 09:48:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.526 "name": "Existed_Raid", 00:11:24.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.526 "strip_size_kb": 0, 00:11:24.526 "state": "configuring", 00:11:24.526 "raid_level": "raid1", 00:11:24.526 "superblock": false, 00:11:24.526 "num_base_bdevs": 4, 00:11:24.526 "num_base_bdevs_discovered": 3, 00:11:24.526 "num_base_bdevs_operational": 4, 00:11:24.526 "base_bdevs_list": [ 00:11:24.526 { 00:11:24.526 "name": "BaseBdev1", 00:11:24.526 "uuid": "4ab5a8fe-3c7b-41ed-8391-ac99d83a6081", 00:11:24.526 "is_configured": true, 00:11:24.526 "data_offset": 0, 00:11:24.526 "data_size": 65536 00:11:24.526 }, 00:11:24.526 { 00:11:24.526 "name": null, 00:11:24.526 "uuid": "763ff163-09e8-44ce-a2d6-8b55075f5cda", 00:11:24.526 "is_configured": false, 00:11:24.526 "data_offset": 
0, 00:11:24.526 "data_size": 65536 00:11:24.526 }, 00:11:24.526 { 00:11:24.526 "name": "BaseBdev3", 00:11:24.526 "uuid": "623df6b1-b2ae-468c-836d-c60fef7fb6ed", 00:11:24.526 "is_configured": true, 00:11:24.526 "data_offset": 0, 00:11:24.526 "data_size": 65536 00:11:24.526 }, 00:11:24.526 { 00:11:24.526 "name": "BaseBdev4", 00:11:24.526 "uuid": "94b3eb5b-b5ba-46ae-9c61-194ddb9cae9b", 00:11:24.526 "is_configured": true, 00:11:24.526 "data_offset": 0, 00:11:24.526 "data_size": 65536 00:11:24.526 } 00:11:24.526 ] 00:11:24.526 }' 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.526 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.785 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:24.785 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.785 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.785 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.785 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.785 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:24.785 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:24.785 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.786 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.786 [2024-12-06 09:48:49.997442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.046 09:48:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.046 "name": "Existed_Raid", 00:11:25.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.046 "strip_size_kb": 0, 00:11:25.046 "state": "configuring", 00:11:25.046 
"raid_level": "raid1", 00:11:25.046 "superblock": false, 00:11:25.046 "num_base_bdevs": 4, 00:11:25.046 "num_base_bdevs_discovered": 2, 00:11:25.046 "num_base_bdevs_operational": 4, 00:11:25.046 "base_bdevs_list": [ 00:11:25.046 { 00:11:25.046 "name": null, 00:11:25.046 "uuid": "4ab5a8fe-3c7b-41ed-8391-ac99d83a6081", 00:11:25.046 "is_configured": false, 00:11:25.046 "data_offset": 0, 00:11:25.046 "data_size": 65536 00:11:25.046 }, 00:11:25.046 { 00:11:25.046 "name": null, 00:11:25.046 "uuid": "763ff163-09e8-44ce-a2d6-8b55075f5cda", 00:11:25.046 "is_configured": false, 00:11:25.046 "data_offset": 0, 00:11:25.046 "data_size": 65536 00:11:25.046 }, 00:11:25.046 { 00:11:25.046 "name": "BaseBdev3", 00:11:25.046 "uuid": "623df6b1-b2ae-468c-836d-c60fef7fb6ed", 00:11:25.046 "is_configured": true, 00:11:25.046 "data_offset": 0, 00:11:25.046 "data_size": 65536 00:11:25.046 }, 00:11:25.046 { 00:11:25.046 "name": "BaseBdev4", 00:11:25.046 "uuid": "94b3eb5b-b5ba-46ae-9c61-194ddb9cae9b", 00:11:25.046 "is_configured": true, 00:11:25.046 "data_offset": 0, 00:11:25.046 "data_size": 65536 00:11:25.046 } 00:11:25.046 ] 00:11:25.046 }' 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.046 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.305 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.305 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.305 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:25.305 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.305 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.305 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:25.305 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:25.305 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.305 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.305 [2024-12-06 09:48:50.567902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.306 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.306 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:25.306 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.306 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.306 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.306 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.306 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.306 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.306 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.306 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.306 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.564 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.564 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:25.564 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.564 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.565 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.565 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.565 "name": "Existed_Raid", 00:11:25.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.565 "strip_size_kb": 0, 00:11:25.565 "state": "configuring", 00:11:25.565 "raid_level": "raid1", 00:11:25.565 "superblock": false, 00:11:25.565 "num_base_bdevs": 4, 00:11:25.565 "num_base_bdevs_discovered": 3, 00:11:25.565 "num_base_bdevs_operational": 4, 00:11:25.565 "base_bdevs_list": [ 00:11:25.565 { 00:11:25.565 "name": null, 00:11:25.565 "uuid": "4ab5a8fe-3c7b-41ed-8391-ac99d83a6081", 00:11:25.565 "is_configured": false, 00:11:25.565 "data_offset": 0, 00:11:25.565 "data_size": 65536 00:11:25.565 }, 00:11:25.565 { 00:11:25.565 "name": "BaseBdev2", 00:11:25.565 "uuid": "763ff163-09e8-44ce-a2d6-8b55075f5cda", 00:11:25.565 "is_configured": true, 00:11:25.565 "data_offset": 0, 00:11:25.565 "data_size": 65536 00:11:25.565 }, 00:11:25.565 { 00:11:25.565 "name": "BaseBdev3", 00:11:25.565 "uuid": "623df6b1-b2ae-468c-836d-c60fef7fb6ed", 00:11:25.565 "is_configured": true, 00:11:25.565 "data_offset": 0, 00:11:25.565 "data_size": 65536 00:11:25.565 }, 00:11:25.565 { 00:11:25.565 "name": "BaseBdev4", 00:11:25.565 "uuid": "94b3eb5b-b5ba-46ae-9c61-194ddb9cae9b", 00:11:25.565 "is_configured": true, 00:11:25.565 "data_offset": 0, 00:11:25.565 "data_size": 65536 00:11:25.565 } 00:11:25.565 ] 00:11:25.565 }' 00:11:25.565 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.565 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.827 09:48:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:25.827 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.827 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.827 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4ab5a8fe-3c7b-41ed-8391-ac99d83a6081 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.827 [2024-12-06 09:48:51.087259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:25.827 [2024-12-06 09:48:51.087310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:25.827 [2024-12-06 09:48:51.087319] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:25.827 
[2024-12-06 09:48:51.087575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:25.827 [2024-12-06 09:48:51.087758] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:25.827 [2024-12-06 09:48:51.087779] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:25.827 [2024-12-06 09:48:51.088072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.827 NewBaseBdev 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.827 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.092 [ 00:11:26.092 { 00:11:26.092 "name": "NewBaseBdev", 00:11:26.092 "aliases": [ 00:11:26.092 "4ab5a8fe-3c7b-41ed-8391-ac99d83a6081" 00:11:26.092 ], 00:11:26.092 "product_name": "Malloc disk", 00:11:26.092 "block_size": 512, 00:11:26.092 "num_blocks": 65536, 00:11:26.092 "uuid": "4ab5a8fe-3c7b-41ed-8391-ac99d83a6081", 00:11:26.092 "assigned_rate_limits": { 00:11:26.092 "rw_ios_per_sec": 0, 00:11:26.092 "rw_mbytes_per_sec": 0, 00:11:26.092 "r_mbytes_per_sec": 0, 00:11:26.092 "w_mbytes_per_sec": 0 00:11:26.092 }, 00:11:26.092 "claimed": true, 00:11:26.092 "claim_type": "exclusive_write", 00:11:26.092 "zoned": false, 00:11:26.092 "supported_io_types": { 00:11:26.092 "read": true, 00:11:26.092 "write": true, 00:11:26.092 "unmap": true, 00:11:26.092 "flush": true, 00:11:26.092 "reset": true, 00:11:26.092 "nvme_admin": false, 00:11:26.092 "nvme_io": false, 00:11:26.092 "nvme_io_md": false, 00:11:26.092 "write_zeroes": true, 00:11:26.092 "zcopy": true, 00:11:26.092 "get_zone_info": false, 00:11:26.092 "zone_management": false, 00:11:26.092 "zone_append": false, 00:11:26.092 "compare": false, 00:11:26.092 "compare_and_write": false, 00:11:26.092 "abort": true, 00:11:26.092 "seek_hole": false, 00:11:26.092 "seek_data": false, 00:11:26.092 "copy": true, 00:11:26.092 "nvme_iov_md": false 00:11:26.092 }, 00:11:26.092 "memory_domains": [ 00:11:26.092 { 00:11:26.092 "dma_device_id": "system", 00:11:26.092 "dma_device_type": 1 00:11:26.092 }, 00:11:26.092 { 00:11:26.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.092 "dma_device_type": 2 00:11:26.092 } 00:11:26.092 ], 00:11:26.092 "driver_specific": {} 00:11:26.092 } 00:11:26.092 ] 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.092 "name": "Existed_Raid", 00:11:26.092 "uuid": "de21be8a-9abf-4ca5-b4c7-78bce4961154", 00:11:26.092 "strip_size_kb": 0, 00:11:26.092 "state": "online", 00:11:26.092 
"raid_level": "raid1", 00:11:26.092 "superblock": false, 00:11:26.092 "num_base_bdevs": 4, 00:11:26.092 "num_base_bdevs_discovered": 4, 00:11:26.092 "num_base_bdevs_operational": 4, 00:11:26.092 "base_bdevs_list": [ 00:11:26.092 { 00:11:26.092 "name": "NewBaseBdev", 00:11:26.092 "uuid": "4ab5a8fe-3c7b-41ed-8391-ac99d83a6081", 00:11:26.092 "is_configured": true, 00:11:26.092 "data_offset": 0, 00:11:26.092 "data_size": 65536 00:11:26.092 }, 00:11:26.092 { 00:11:26.092 "name": "BaseBdev2", 00:11:26.092 "uuid": "763ff163-09e8-44ce-a2d6-8b55075f5cda", 00:11:26.092 "is_configured": true, 00:11:26.092 "data_offset": 0, 00:11:26.092 "data_size": 65536 00:11:26.092 }, 00:11:26.092 { 00:11:26.092 "name": "BaseBdev3", 00:11:26.092 "uuid": "623df6b1-b2ae-468c-836d-c60fef7fb6ed", 00:11:26.092 "is_configured": true, 00:11:26.092 "data_offset": 0, 00:11:26.092 "data_size": 65536 00:11:26.092 }, 00:11:26.092 { 00:11:26.092 "name": "BaseBdev4", 00:11:26.092 "uuid": "94b3eb5b-b5ba-46ae-9c61-194ddb9cae9b", 00:11:26.092 "is_configured": true, 00:11:26.092 "data_offset": 0, 00:11:26.092 "data_size": 65536 00:11:26.092 } 00:11:26.092 ] 00:11:26.092 }' 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.092 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.350 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:26.350 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:26.350 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:26.350 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.350 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.350 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:26.350 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:26.350 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.350 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.350 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.350 [2024-12-06 09:48:51.534906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.350 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.350 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.350 "name": "Existed_Raid", 00:11:26.350 "aliases": [ 00:11:26.350 "de21be8a-9abf-4ca5-b4c7-78bce4961154" 00:11:26.350 ], 00:11:26.350 "product_name": "Raid Volume", 00:11:26.350 "block_size": 512, 00:11:26.350 "num_blocks": 65536, 00:11:26.350 "uuid": "de21be8a-9abf-4ca5-b4c7-78bce4961154", 00:11:26.350 "assigned_rate_limits": { 00:11:26.350 "rw_ios_per_sec": 0, 00:11:26.350 "rw_mbytes_per_sec": 0, 00:11:26.350 "r_mbytes_per_sec": 0, 00:11:26.350 "w_mbytes_per_sec": 0 00:11:26.350 }, 00:11:26.350 "claimed": false, 00:11:26.350 "zoned": false, 00:11:26.350 "supported_io_types": { 00:11:26.350 "read": true, 00:11:26.350 "write": true, 00:11:26.350 "unmap": false, 00:11:26.350 "flush": false, 00:11:26.350 "reset": true, 00:11:26.350 "nvme_admin": false, 00:11:26.350 "nvme_io": false, 00:11:26.350 "nvme_io_md": false, 00:11:26.350 "write_zeroes": true, 00:11:26.350 "zcopy": false, 00:11:26.350 "get_zone_info": false, 00:11:26.350 "zone_management": false, 00:11:26.351 "zone_append": false, 00:11:26.351 "compare": false, 00:11:26.351 "compare_and_write": false, 00:11:26.351 "abort": false, 00:11:26.351 "seek_hole": false, 00:11:26.351 "seek_data": false, 00:11:26.351 
"copy": false, 00:11:26.351 "nvme_iov_md": false 00:11:26.351 }, 00:11:26.351 "memory_domains": [ 00:11:26.351 { 00:11:26.351 "dma_device_id": "system", 00:11:26.351 "dma_device_type": 1 00:11:26.351 }, 00:11:26.351 { 00:11:26.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.351 "dma_device_type": 2 00:11:26.351 }, 00:11:26.351 { 00:11:26.351 "dma_device_id": "system", 00:11:26.351 "dma_device_type": 1 00:11:26.351 }, 00:11:26.351 { 00:11:26.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.351 "dma_device_type": 2 00:11:26.351 }, 00:11:26.351 { 00:11:26.351 "dma_device_id": "system", 00:11:26.351 "dma_device_type": 1 00:11:26.351 }, 00:11:26.351 { 00:11:26.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.351 "dma_device_type": 2 00:11:26.351 }, 00:11:26.351 { 00:11:26.351 "dma_device_id": "system", 00:11:26.351 "dma_device_type": 1 00:11:26.351 }, 00:11:26.351 { 00:11:26.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.351 "dma_device_type": 2 00:11:26.351 } 00:11:26.351 ], 00:11:26.351 "driver_specific": { 00:11:26.351 "raid": { 00:11:26.351 "uuid": "de21be8a-9abf-4ca5-b4c7-78bce4961154", 00:11:26.351 "strip_size_kb": 0, 00:11:26.351 "state": "online", 00:11:26.351 "raid_level": "raid1", 00:11:26.351 "superblock": false, 00:11:26.351 "num_base_bdevs": 4, 00:11:26.351 "num_base_bdevs_discovered": 4, 00:11:26.351 "num_base_bdevs_operational": 4, 00:11:26.351 "base_bdevs_list": [ 00:11:26.351 { 00:11:26.351 "name": "NewBaseBdev", 00:11:26.351 "uuid": "4ab5a8fe-3c7b-41ed-8391-ac99d83a6081", 00:11:26.351 "is_configured": true, 00:11:26.351 "data_offset": 0, 00:11:26.351 "data_size": 65536 00:11:26.351 }, 00:11:26.351 { 00:11:26.351 "name": "BaseBdev2", 00:11:26.351 "uuid": "763ff163-09e8-44ce-a2d6-8b55075f5cda", 00:11:26.351 "is_configured": true, 00:11:26.351 "data_offset": 0, 00:11:26.351 "data_size": 65536 00:11:26.351 }, 00:11:26.351 { 00:11:26.351 "name": "BaseBdev3", 00:11:26.351 "uuid": "623df6b1-b2ae-468c-836d-c60fef7fb6ed", 00:11:26.351 
"is_configured": true, 00:11:26.351 "data_offset": 0, 00:11:26.351 "data_size": 65536 00:11:26.351 }, 00:11:26.351 { 00:11:26.351 "name": "BaseBdev4", 00:11:26.351 "uuid": "94b3eb5b-b5ba-46ae-9c61-194ddb9cae9b", 00:11:26.351 "is_configured": true, 00:11:26.351 "data_offset": 0, 00:11:26.351 "data_size": 65536 00:11:26.351 } 00:11:26.351 ] 00:11:26.351 } 00:11:26.351 } 00:11:26.351 }' 00:11:26.351 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.351 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:26.351 BaseBdev2 00:11:26.351 BaseBdev3 00:11:26.351 BaseBdev4' 00:11:26.351 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.610 09:48:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.610 09:48:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.610 [2024-12-06 09:48:51.794094] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.610 [2024-12-06 09:48:51.794128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.610 [2024-12-06 09:48:51.794222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.610 [2024-12-06 09:48:51.794520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.610 [2024-12-06 09:48:51.794541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73108 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73108 ']' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73108 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73108 00:11:26.610 killing process with pid 73108 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73108' 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73108 00:11:26.610 [2024-12-06 09:48:51.840176] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.610 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73108 00:11:27.178 [2024-12-06 09:48:52.233859] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:28.117 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:28.117 00:11:28.117 real 0m11.286s 00:11:28.117 user 0m17.958s 00:11:28.117 sys 0m1.969s 00:11:28.117 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.117 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.117 ************************************ 00:11:28.117 END TEST raid_state_function_test 00:11:28.117 ************************************ 
00:11:28.377 09:48:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:28.377 09:48:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:28.377 09:48:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.377 09:48:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:28.377 ************************************ 00:11:28.377 START TEST raid_state_function_test_sb 00:11:28.377 ************************************ 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:28.377 
09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73779 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73779' 00:11:28.377 Process raid pid: 73779 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73779 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73779 ']' 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.377 09:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.378 09:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.378 09:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.378 [2024-12-06 09:48:53.517632] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:11:28.378 [2024-12-06 09:48:53.517747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.638 [2024-12-06 09:48:53.692349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.638 [2024-12-06 09:48:53.807988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.898 [2024-12-06 09:48:54.012198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.898 [2024-12-06 09:48:54.012243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.158 [2024-12-06 09:48:54.395607] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:29.158 [2024-12-06 09:48:54.395664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:29.158 [2024-12-06 09:48:54.395679] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:29.158 [2024-12-06 09:48:54.395689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:29.158 [2024-12-06 09:48:54.395695] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:29.158 [2024-12-06 09:48:54.395704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:29.158 [2024-12-06 09:48:54.395710] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:29.158 [2024-12-06 09:48:54.395719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.158 09:48:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.158 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.418 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.418 "name": "Existed_Raid", 00:11:29.418 "uuid": "e306bbb1-2d8c-4dc4-9833-dc122ef3a9d4", 00:11:29.418 "strip_size_kb": 0, 00:11:29.418 "state": "configuring", 00:11:29.418 "raid_level": "raid1", 00:11:29.418 "superblock": true, 00:11:29.418 "num_base_bdevs": 4, 00:11:29.418 "num_base_bdevs_discovered": 0, 00:11:29.418 "num_base_bdevs_operational": 4, 00:11:29.418 "base_bdevs_list": [ 00:11:29.418 { 00:11:29.418 "name": "BaseBdev1", 00:11:29.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.418 "is_configured": false, 00:11:29.418 "data_offset": 0, 00:11:29.418 "data_size": 0 00:11:29.418 }, 00:11:29.418 { 00:11:29.418 "name": "BaseBdev2", 00:11:29.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.418 "is_configured": false, 00:11:29.418 "data_offset": 0, 00:11:29.418 "data_size": 0 00:11:29.418 }, 00:11:29.418 { 00:11:29.418 "name": "BaseBdev3", 00:11:29.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.418 "is_configured": false, 00:11:29.418 "data_offset": 0, 00:11:29.418 "data_size": 0 00:11:29.418 }, 00:11:29.418 { 00:11:29.418 "name": "BaseBdev4", 00:11:29.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.418 "is_configured": false, 00:11:29.418 "data_offset": 0, 00:11:29.418 "data_size": 0 00:11:29.418 } 00:11:29.418 ] 00:11:29.418 }' 00:11:29.418 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.418 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.678 [2024-12-06 09:48:54.866747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.678 [2024-12-06 09:48:54.866795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.678 [2024-12-06 09:48:54.878727] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:29.678 [2024-12-06 09:48:54.878787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:29.678 [2024-12-06 09:48:54.878796] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:29.678 [2024-12-06 09:48:54.878806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:29.678 [2024-12-06 09:48:54.878812] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:29.678 [2024-12-06 09:48:54.878821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:29.678 [2024-12-06 09:48:54.878827] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:29.678 [2024-12-06 09:48:54.878835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.678 [2024-12-06 09:48:54.926415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.678 BaseBdev1 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.678 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.939 [ 00:11:29.939 { 00:11:29.939 "name": "BaseBdev1", 00:11:29.939 "aliases": [ 00:11:29.939 "8e3c9acc-5950-40fe-a0bf-7a4573de490e" 00:11:29.939 ], 00:11:29.939 "product_name": "Malloc disk", 00:11:29.939 "block_size": 512, 00:11:29.939 "num_blocks": 65536, 00:11:29.939 "uuid": "8e3c9acc-5950-40fe-a0bf-7a4573de490e", 00:11:29.939 "assigned_rate_limits": { 00:11:29.939 "rw_ios_per_sec": 0, 00:11:29.939 "rw_mbytes_per_sec": 0, 00:11:29.939 "r_mbytes_per_sec": 0, 00:11:29.939 "w_mbytes_per_sec": 0 00:11:29.939 }, 00:11:29.939 "claimed": true, 00:11:29.939 "claim_type": "exclusive_write", 00:11:29.939 "zoned": false, 00:11:29.939 "supported_io_types": { 00:11:29.939 "read": true, 00:11:29.939 "write": true, 00:11:29.939 "unmap": true, 00:11:29.939 "flush": true, 00:11:29.939 "reset": true, 00:11:29.939 "nvme_admin": false, 00:11:29.939 "nvme_io": false, 00:11:29.939 "nvme_io_md": false, 00:11:29.939 "write_zeroes": true, 00:11:29.939 "zcopy": true, 00:11:29.939 "get_zone_info": false, 00:11:29.939 "zone_management": false, 00:11:29.939 "zone_append": false, 00:11:29.939 "compare": false, 00:11:29.939 "compare_and_write": false, 00:11:29.939 "abort": true, 00:11:29.939 "seek_hole": false, 00:11:29.939 "seek_data": false, 00:11:29.939 "copy": true, 00:11:29.939 "nvme_iov_md": false 00:11:29.939 }, 00:11:29.939 "memory_domains": [ 00:11:29.939 { 00:11:29.939 "dma_device_id": "system", 00:11:29.939 "dma_device_type": 1 00:11:29.939 }, 00:11:29.939 { 00:11:29.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.939 "dma_device_type": 2 00:11:29.939 } 00:11:29.939 ], 00:11:29.939 "driver_specific": {} 
00:11:29.939 } 00:11:29.939 ] 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.939 09:48:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.939 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.939 "name": "Existed_Raid", 00:11:29.939 "uuid": "792cfa88-0181-46ff-b945-0cb011f610dc", 00:11:29.939 "strip_size_kb": 0, 00:11:29.939 "state": "configuring", 00:11:29.939 "raid_level": "raid1", 00:11:29.939 "superblock": true, 00:11:29.939 "num_base_bdevs": 4, 00:11:29.939 "num_base_bdevs_discovered": 1, 00:11:29.939 "num_base_bdevs_operational": 4, 00:11:29.939 "base_bdevs_list": [ 00:11:29.939 { 00:11:29.939 "name": "BaseBdev1", 00:11:29.939 "uuid": "8e3c9acc-5950-40fe-a0bf-7a4573de490e", 00:11:29.939 "is_configured": true, 00:11:29.939 "data_offset": 2048, 00:11:29.939 "data_size": 63488 00:11:29.939 }, 00:11:29.939 { 00:11:29.939 "name": "BaseBdev2", 00:11:29.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.939 "is_configured": false, 00:11:29.939 "data_offset": 0, 00:11:29.939 "data_size": 0 00:11:29.939 }, 00:11:29.939 { 00:11:29.939 "name": "BaseBdev3", 00:11:29.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.939 "is_configured": false, 00:11:29.939 "data_offset": 0, 00:11:29.939 "data_size": 0 00:11:29.939 }, 00:11:29.939 { 00:11:29.939 "name": "BaseBdev4", 00:11:29.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.939 "is_configured": false, 00:11:29.939 "data_offset": 0, 00:11:29.939 "data_size": 0 00:11:29.939 } 00:11:29.939 ] 00:11:29.939 }' 00:11:29.939 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.939 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.198 [2024-12-06 09:48:55.397656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.198 [2024-12-06 09:48:55.397717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.198 [2024-12-06 09:48:55.409666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.198 [2024-12-06 09:48:55.411425] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.198 [2024-12-06 09:48:55.411484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.198 [2024-12-06 09:48:55.411493] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:30.198 [2024-12-06 09:48:55.411504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:30.198 [2024-12-06 09:48:55.411511] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:30.198 [2024-12-06 09:48:55.411518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:30.198 09:48:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.198 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.199 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.199 "name": 
"Existed_Raid", 00:11:30.199 "uuid": "c8f65f90-5bf2-4ccb-836b-81af62911b35", 00:11:30.199 "strip_size_kb": 0, 00:11:30.199 "state": "configuring", 00:11:30.199 "raid_level": "raid1", 00:11:30.199 "superblock": true, 00:11:30.199 "num_base_bdevs": 4, 00:11:30.199 "num_base_bdevs_discovered": 1, 00:11:30.199 "num_base_bdevs_operational": 4, 00:11:30.199 "base_bdevs_list": [ 00:11:30.199 { 00:11:30.199 "name": "BaseBdev1", 00:11:30.199 "uuid": "8e3c9acc-5950-40fe-a0bf-7a4573de490e", 00:11:30.199 "is_configured": true, 00:11:30.199 "data_offset": 2048, 00:11:30.199 "data_size": 63488 00:11:30.199 }, 00:11:30.199 { 00:11:30.199 "name": "BaseBdev2", 00:11:30.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.199 "is_configured": false, 00:11:30.199 "data_offset": 0, 00:11:30.199 "data_size": 0 00:11:30.199 }, 00:11:30.199 { 00:11:30.199 "name": "BaseBdev3", 00:11:30.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.199 "is_configured": false, 00:11:30.199 "data_offset": 0, 00:11:30.199 "data_size": 0 00:11:30.199 }, 00:11:30.199 { 00:11:30.199 "name": "BaseBdev4", 00:11:30.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.199 "is_configured": false, 00:11:30.199 "data_offset": 0, 00:11:30.199 "data_size": 0 00:11:30.199 } 00:11:30.199 ] 00:11:30.199 }' 00:11:30.199 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.199 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.767 [2024-12-06 09:48:55.907388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.767 
BaseBdev2 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.767 [ 00:11:30.767 { 00:11:30.767 "name": "BaseBdev2", 00:11:30.767 "aliases": [ 00:11:30.767 "b8913a38-5383-461a-a5ff-6fbf5e39daaa" 00:11:30.767 ], 00:11:30.767 "product_name": "Malloc disk", 00:11:30.767 "block_size": 512, 00:11:30.767 "num_blocks": 65536, 00:11:30.767 "uuid": "b8913a38-5383-461a-a5ff-6fbf5e39daaa", 00:11:30.767 "assigned_rate_limits": { 
00:11:30.767 "rw_ios_per_sec": 0, 00:11:30.767 "rw_mbytes_per_sec": 0, 00:11:30.767 "r_mbytes_per_sec": 0, 00:11:30.767 "w_mbytes_per_sec": 0 00:11:30.767 }, 00:11:30.767 "claimed": true, 00:11:30.767 "claim_type": "exclusive_write", 00:11:30.767 "zoned": false, 00:11:30.767 "supported_io_types": { 00:11:30.767 "read": true, 00:11:30.767 "write": true, 00:11:30.767 "unmap": true, 00:11:30.767 "flush": true, 00:11:30.767 "reset": true, 00:11:30.767 "nvme_admin": false, 00:11:30.767 "nvme_io": false, 00:11:30.767 "nvme_io_md": false, 00:11:30.767 "write_zeroes": true, 00:11:30.767 "zcopy": true, 00:11:30.767 "get_zone_info": false, 00:11:30.767 "zone_management": false, 00:11:30.767 "zone_append": false, 00:11:30.767 "compare": false, 00:11:30.767 "compare_and_write": false, 00:11:30.767 "abort": true, 00:11:30.767 "seek_hole": false, 00:11:30.767 "seek_data": false, 00:11:30.767 "copy": true, 00:11:30.767 "nvme_iov_md": false 00:11:30.767 }, 00:11:30.767 "memory_domains": [ 00:11:30.767 { 00:11:30.767 "dma_device_id": "system", 00:11:30.767 "dma_device_type": 1 00:11:30.767 }, 00:11:30.767 { 00:11:30.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.767 "dma_device_type": 2 00:11:30.767 } 00:11:30.767 ], 00:11:30.767 "driver_specific": {} 00:11:30.767 } 00:11:30.767 ] 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.767 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.767 "name": "Existed_Raid", 00:11:30.767 "uuid": "c8f65f90-5bf2-4ccb-836b-81af62911b35", 00:11:30.767 "strip_size_kb": 0, 00:11:30.767 "state": "configuring", 00:11:30.767 "raid_level": "raid1", 00:11:30.767 "superblock": true, 00:11:30.767 "num_base_bdevs": 4, 00:11:30.767 "num_base_bdevs_discovered": 2, 00:11:30.767 "num_base_bdevs_operational": 4, 00:11:30.768 
"base_bdevs_list": [ 00:11:30.768 { 00:11:30.768 "name": "BaseBdev1", 00:11:30.768 "uuid": "8e3c9acc-5950-40fe-a0bf-7a4573de490e", 00:11:30.768 "is_configured": true, 00:11:30.768 "data_offset": 2048, 00:11:30.768 "data_size": 63488 00:11:30.768 }, 00:11:30.768 { 00:11:30.768 "name": "BaseBdev2", 00:11:30.768 "uuid": "b8913a38-5383-461a-a5ff-6fbf5e39daaa", 00:11:30.768 "is_configured": true, 00:11:30.768 "data_offset": 2048, 00:11:30.768 "data_size": 63488 00:11:30.768 }, 00:11:30.768 { 00:11:30.768 "name": "BaseBdev3", 00:11:30.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.768 "is_configured": false, 00:11:30.768 "data_offset": 0, 00:11:30.768 "data_size": 0 00:11:30.768 }, 00:11:30.768 { 00:11:30.768 "name": "BaseBdev4", 00:11:30.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.768 "is_configured": false, 00:11:30.768 "data_offset": 0, 00:11:30.768 "data_size": 0 00:11:30.768 } 00:11:30.768 ] 00:11:30.768 }' 00:11:30.768 09:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.768 09:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.338 [2024-12-06 09:48:56.397697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.338 BaseBdev3 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.338 [ 00:11:31.338 { 00:11:31.338 "name": "BaseBdev3", 00:11:31.338 "aliases": [ 00:11:31.338 "9a8bba91-ab5a-4c15-9219-4122cdf7ad8c" 00:11:31.338 ], 00:11:31.338 "product_name": "Malloc disk", 00:11:31.338 "block_size": 512, 00:11:31.338 "num_blocks": 65536, 00:11:31.338 "uuid": "9a8bba91-ab5a-4c15-9219-4122cdf7ad8c", 00:11:31.338 "assigned_rate_limits": { 00:11:31.338 "rw_ios_per_sec": 0, 00:11:31.338 "rw_mbytes_per_sec": 0, 00:11:31.338 "r_mbytes_per_sec": 0, 00:11:31.338 "w_mbytes_per_sec": 0 00:11:31.338 }, 00:11:31.338 "claimed": true, 00:11:31.338 "claim_type": "exclusive_write", 00:11:31.338 "zoned": false, 00:11:31.338 "supported_io_types": { 00:11:31.338 "read": true, 00:11:31.338 
"write": true, 00:11:31.338 "unmap": true, 00:11:31.338 "flush": true, 00:11:31.338 "reset": true, 00:11:31.338 "nvme_admin": false, 00:11:31.338 "nvme_io": false, 00:11:31.338 "nvme_io_md": false, 00:11:31.338 "write_zeroes": true, 00:11:31.338 "zcopy": true, 00:11:31.338 "get_zone_info": false, 00:11:31.338 "zone_management": false, 00:11:31.338 "zone_append": false, 00:11:31.338 "compare": false, 00:11:31.338 "compare_and_write": false, 00:11:31.338 "abort": true, 00:11:31.338 "seek_hole": false, 00:11:31.338 "seek_data": false, 00:11:31.338 "copy": true, 00:11:31.338 "nvme_iov_md": false 00:11:31.338 }, 00:11:31.338 "memory_domains": [ 00:11:31.338 { 00:11:31.338 "dma_device_id": "system", 00:11:31.338 "dma_device_type": 1 00:11:31.338 }, 00:11:31.338 { 00:11:31.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.338 "dma_device_type": 2 00:11:31.338 } 00:11:31.338 ], 00:11:31.338 "driver_specific": {} 00:11:31.338 } 00:11:31.338 ] 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.338 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.338 "name": "Existed_Raid", 00:11:31.338 "uuid": "c8f65f90-5bf2-4ccb-836b-81af62911b35", 00:11:31.338 "strip_size_kb": 0, 00:11:31.338 "state": "configuring", 00:11:31.338 "raid_level": "raid1", 00:11:31.338 "superblock": true, 00:11:31.338 "num_base_bdevs": 4, 00:11:31.338 "num_base_bdevs_discovered": 3, 00:11:31.338 "num_base_bdevs_operational": 4, 00:11:31.338 "base_bdevs_list": [ 00:11:31.338 { 00:11:31.338 "name": "BaseBdev1", 00:11:31.338 "uuid": "8e3c9acc-5950-40fe-a0bf-7a4573de490e", 00:11:31.338 "is_configured": true, 00:11:31.338 "data_offset": 2048, 00:11:31.338 "data_size": 63488 00:11:31.338 }, 00:11:31.339 { 00:11:31.339 "name": "BaseBdev2", 00:11:31.339 "uuid": 
"b8913a38-5383-461a-a5ff-6fbf5e39daaa", 00:11:31.339 "is_configured": true, 00:11:31.339 "data_offset": 2048, 00:11:31.339 "data_size": 63488 00:11:31.339 }, 00:11:31.339 { 00:11:31.339 "name": "BaseBdev3", 00:11:31.339 "uuid": "9a8bba91-ab5a-4c15-9219-4122cdf7ad8c", 00:11:31.339 "is_configured": true, 00:11:31.339 "data_offset": 2048, 00:11:31.339 "data_size": 63488 00:11:31.339 }, 00:11:31.339 { 00:11:31.339 "name": "BaseBdev4", 00:11:31.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.339 "is_configured": false, 00:11:31.339 "data_offset": 0, 00:11:31.339 "data_size": 0 00:11:31.339 } 00:11:31.339 ] 00:11:31.339 }' 00:11:31.339 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.339 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.909 [2024-12-06 09:48:56.912106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:31.909 [2024-12-06 09:48:56.912409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:31.909 [2024-12-06 09:48:56.912428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:31.909 [2024-12-06 09:48:56.912698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:31.909 [2024-12-06 09:48:56.912868] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:31.909 [2024-12-06 09:48:56.912885] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:31.909 BaseBdev4 00:11:31.909 [2024-12-06 09:48:56.913034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.909 [ 00:11:31.909 { 00:11:31.909 "name": "BaseBdev4", 00:11:31.909 "aliases": [ 00:11:31.909 "89584e22-4d48-43dc-9c22-d8acbdaaea61" 00:11:31.909 ], 00:11:31.909 "product_name": "Malloc disk", 00:11:31.909 "block_size": 512, 00:11:31.909 
"num_blocks": 65536, 00:11:31.909 "uuid": "89584e22-4d48-43dc-9c22-d8acbdaaea61", 00:11:31.909 "assigned_rate_limits": { 00:11:31.909 "rw_ios_per_sec": 0, 00:11:31.909 "rw_mbytes_per_sec": 0, 00:11:31.909 "r_mbytes_per_sec": 0, 00:11:31.909 "w_mbytes_per_sec": 0 00:11:31.909 }, 00:11:31.909 "claimed": true, 00:11:31.909 "claim_type": "exclusive_write", 00:11:31.909 "zoned": false, 00:11:31.909 "supported_io_types": { 00:11:31.909 "read": true, 00:11:31.909 "write": true, 00:11:31.909 "unmap": true, 00:11:31.909 "flush": true, 00:11:31.909 "reset": true, 00:11:31.909 "nvme_admin": false, 00:11:31.909 "nvme_io": false, 00:11:31.909 "nvme_io_md": false, 00:11:31.909 "write_zeroes": true, 00:11:31.909 "zcopy": true, 00:11:31.909 "get_zone_info": false, 00:11:31.909 "zone_management": false, 00:11:31.909 "zone_append": false, 00:11:31.909 "compare": false, 00:11:31.909 "compare_and_write": false, 00:11:31.909 "abort": true, 00:11:31.909 "seek_hole": false, 00:11:31.909 "seek_data": false, 00:11:31.909 "copy": true, 00:11:31.909 "nvme_iov_md": false 00:11:31.909 }, 00:11:31.909 "memory_domains": [ 00:11:31.909 { 00:11:31.909 "dma_device_id": "system", 00:11:31.909 "dma_device_type": 1 00:11:31.909 }, 00:11:31.909 { 00:11:31.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.909 "dma_device_type": 2 00:11:31.909 } 00:11:31.909 ], 00:11:31.909 "driver_specific": {} 00:11:31.909 } 00:11:31.909 ] 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.909 09:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.909 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.909 "name": "Existed_Raid", 00:11:31.909 "uuid": "c8f65f90-5bf2-4ccb-836b-81af62911b35", 00:11:31.909 "strip_size_kb": 0, 00:11:31.909 "state": "online", 00:11:31.909 "raid_level": "raid1", 00:11:31.909 "superblock": true, 00:11:31.909 "num_base_bdevs": 4, 
00:11:31.909 "num_base_bdevs_discovered": 4, 00:11:31.909 "num_base_bdevs_operational": 4, 00:11:31.909 "base_bdevs_list": [ 00:11:31.909 { 00:11:31.909 "name": "BaseBdev1", 00:11:31.909 "uuid": "8e3c9acc-5950-40fe-a0bf-7a4573de490e", 00:11:31.909 "is_configured": true, 00:11:31.909 "data_offset": 2048, 00:11:31.909 "data_size": 63488 00:11:31.909 }, 00:11:31.909 { 00:11:31.909 "name": "BaseBdev2", 00:11:31.909 "uuid": "b8913a38-5383-461a-a5ff-6fbf5e39daaa", 00:11:31.909 "is_configured": true, 00:11:31.909 "data_offset": 2048, 00:11:31.909 "data_size": 63488 00:11:31.909 }, 00:11:31.909 { 00:11:31.909 "name": "BaseBdev3", 00:11:31.909 "uuid": "9a8bba91-ab5a-4c15-9219-4122cdf7ad8c", 00:11:31.909 "is_configured": true, 00:11:31.909 "data_offset": 2048, 00:11:31.909 "data_size": 63488 00:11:31.909 }, 00:11:31.909 { 00:11:31.909 "name": "BaseBdev4", 00:11:31.909 "uuid": "89584e22-4d48-43dc-9c22-d8acbdaaea61", 00:11:31.909 "is_configured": true, 00:11:31.909 "data_offset": 2048, 00:11:31.909 "data_size": 63488 00:11:31.909 } 00:11:31.909 ] 00:11:31.909 }' 00:11:31.909 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.909 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.170 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:32.170 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:32.170 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.170 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.170 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.170 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.170 
09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:32.170 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.170 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.170 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.170 [2024-12-06 09:48:57.371749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.170 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.170 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.170 "name": "Existed_Raid", 00:11:32.170 "aliases": [ 00:11:32.170 "c8f65f90-5bf2-4ccb-836b-81af62911b35" 00:11:32.170 ], 00:11:32.170 "product_name": "Raid Volume", 00:11:32.170 "block_size": 512, 00:11:32.170 "num_blocks": 63488, 00:11:32.170 "uuid": "c8f65f90-5bf2-4ccb-836b-81af62911b35", 00:11:32.170 "assigned_rate_limits": { 00:11:32.170 "rw_ios_per_sec": 0, 00:11:32.170 "rw_mbytes_per_sec": 0, 00:11:32.170 "r_mbytes_per_sec": 0, 00:11:32.170 "w_mbytes_per_sec": 0 00:11:32.170 }, 00:11:32.170 "claimed": false, 00:11:32.170 "zoned": false, 00:11:32.170 "supported_io_types": { 00:11:32.170 "read": true, 00:11:32.170 "write": true, 00:11:32.170 "unmap": false, 00:11:32.170 "flush": false, 00:11:32.170 "reset": true, 00:11:32.170 "nvme_admin": false, 00:11:32.170 "nvme_io": false, 00:11:32.170 "nvme_io_md": false, 00:11:32.170 "write_zeroes": true, 00:11:32.170 "zcopy": false, 00:11:32.170 "get_zone_info": false, 00:11:32.170 "zone_management": false, 00:11:32.170 "zone_append": false, 00:11:32.170 "compare": false, 00:11:32.170 "compare_and_write": false, 00:11:32.170 "abort": false, 00:11:32.170 "seek_hole": false, 00:11:32.170 "seek_data": false, 00:11:32.170 "copy": false, 00:11:32.170 
"nvme_iov_md": false 00:11:32.170 }, 00:11:32.170 "memory_domains": [ 00:11:32.170 { 00:11:32.170 "dma_device_id": "system", 00:11:32.170 "dma_device_type": 1 00:11:32.170 }, 00:11:32.170 { 00:11:32.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.170 "dma_device_type": 2 00:11:32.170 }, 00:11:32.170 { 00:11:32.170 "dma_device_id": "system", 00:11:32.170 "dma_device_type": 1 00:11:32.170 }, 00:11:32.170 { 00:11:32.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.170 "dma_device_type": 2 00:11:32.170 }, 00:11:32.170 { 00:11:32.170 "dma_device_id": "system", 00:11:32.170 "dma_device_type": 1 00:11:32.170 }, 00:11:32.170 { 00:11:32.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.170 "dma_device_type": 2 00:11:32.170 }, 00:11:32.170 { 00:11:32.170 "dma_device_id": "system", 00:11:32.170 "dma_device_type": 1 00:11:32.170 }, 00:11:32.170 { 00:11:32.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.170 "dma_device_type": 2 00:11:32.170 } 00:11:32.170 ], 00:11:32.170 "driver_specific": { 00:11:32.170 "raid": { 00:11:32.170 "uuid": "c8f65f90-5bf2-4ccb-836b-81af62911b35", 00:11:32.170 "strip_size_kb": 0, 00:11:32.170 "state": "online", 00:11:32.170 "raid_level": "raid1", 00:11:32.170 "superblock": true, 00:11:32.170 "num_base_bdevs": 4, 00:11:32.170 "num_base_bdevs_discovered": 4, 00:11:32.170 "num_base_bdevs_operational": 4, 00:11:32.170 "base_bdevs_list": [ 00:11:32.170 { 00:11:32.170 "name": "BaseBdev1", 00:11:32.170 "uuid": "8e3c9acc-5950-40fe-a0bf-7a4573de490e", 00:11:32.170 "is_configured": true, 00:11:32.170 "data_offset": 2048, 00:11:32.170 "data_size": 63488 00:11:32.170 }, 00:11:32.170 { 00:11:32.170 "name": "BaseBdev2", 00:11:32.170 "uuid": "b8913a38-5383-461a-a5ff-6fbf5e39daaa", 00:11:32.170 "is_configured": true, 00:11:32.170 "data_offset": 2048, 00:11:32.170 "data_size": 63488 00:11:32.170 }, 00:11:32.170 { 00:11:32.170 "name": "BaseBdev3", 00:11:32.170 "uuid": "9a8bba91-ab5a-4c15-9219-4122cdf7ad8c", 00:11:32.170 "is_configured": true, 
00:11:32.170 "data_offset": 2048, 00:11:32.170 "data_size": 63488 00:11:32.170 }, 00:11:32.170 { 00:11:32.170 "name": "BaseBdev4", 00:11:32.170 "uuid": "89584e22-4d48-43dc-9c22-d8acbdaaea61", 00:11:32.170 "is_configured": true, 00:11:32.170 "data_offset": 2048, 00:11:32.170 "data_size": 63488 00:11:32.170 } 00:11:32.170 ] 00:11:32.170 } 00:11:32.171 } 00:11:32.171 }' 00:11:32.171 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:32.431 BaseBdev2 00:11:32.431 BaseBdev3 00:11:32.431 BaseBdev4' 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.431 09:48:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.431 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.431 [2024-12-06 09:48:57.607038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:32.691 09:48:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.691 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.692 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.692 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.692 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.692 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.692 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.692 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.692 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.692 "name": "Existed_Raid", 00:11:32.692 "uuid": "c8f65f90-5bf2-4ccb-836b-81af62911b35", 00:11:32.692 "strip_size_kb": 0, 00:11:32.692 
"state": "online", 00:11:32.692 "raid_level": "raid1", 00:11:32.692 "superblock": true, 00:11:32.692 "num_base_bdevs": 4, 00:11:32.692 "num_base_bdevs_discovered": 3, 00:11:32.692 "num_base_bdevs_operational": 3, 00:11:32.692 "base_bdevs_list": [ 00:11:32.692 { 00:11:32.692 "name": null, 00:11:32.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.692 "is_configured": false, 00:11:32.692 "data_offset": 0, 00:11:32.692 "data_size": 63488 00:11:32.692 }, 00:11:32.692 { 00:11:32.692 "name": "BaseBdev2", 00:11:32.692 "uuid": "b8913a38-5383-461a-a5ff-6fbf5e39daaa", 00:11:32.692 "is_configured": true, 00:11:32.692 "data_offset": 2048, 00:11:32.692 "data_size": 63488 00:11:32.692 }, 00:11:32.692 { 00:11:32.692 "name": "BaseBdev3", 00:11:32.692 "uuid": "9a8bba91-ab5a-4c15-9219-4122cdf7ad8c", 00:11:32.692 "is_configured": true, 00:11:32.692 "data_offset": 2048, 00:11:32.692 "data_size": 63488 00:11:32.692 }, 00:11:32.692 { 00:11:32.692 "name": "BaseBdev4", 00:11:32.692 "uuid": "89584e22-4d48-43dc-9c22-d8acbdaaea61", 00:11:32.692 "is_configured": true, 00:11:32.692 "data_offset": 2048, 00:11:32.692 "data_size": 63488 00:11:32.692 } 00:11:32.692 ] 00:11:32.692 }' 00:11:32.692 09:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.692 09:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.952 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:32.952 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:32.952 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.952 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:32.952 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.952 09:48:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.952 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.952 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:32.952 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:32.952 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:32.952 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.952 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.212 [2024-12-06 09:48:58.225915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.212 [2024-12-06 09:48:58.380185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:33.212 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.473 [2024-12-06 09:48:58.531152] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:33.473 [2024-12-06 09:48:58.531254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.473 [2024-12-06 09:48:58.627318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.473 [2024-12-06 09:48:58.627376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.473 [2024-12-06 09:48:58.627389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.473 BaseBdev2 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.473 09:48:58 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:33.734 [ 00:11:33.734 { 00:11:33.734 "name": "BaseBdev2", 00:11:33.734 "aliases": [ 00:11:33.734 "4d46868e-8549-41f8-9009-a3acf2ef620a" 00:11:33.734 ], 00:11:33.734 "product_name": "Malloc disk", 00:11:33.734 "block_size": 512, 00:11:33.734 "num_blocks": 65536, 00:11:33.734 "uuid": "4d46868e-8549-41f8-9009-a3acf2ef620a", 00:11:33.734 "assigned_rate_limits": { 00:11:33.734 "rw_ios_per_sec": 0, 00:11:33.734 "rw_mbytes_per_sec": 0, 00:11:33.734 "r_mbytes_per_sec": 0, 00:11:33.734 "w_mbytes_per_sec": 0 00:11:33.734 }, 00:11:33.734 "claimed": false, 00:11:33.734 "zoned": false, 00:11:33.734 "supported_io_types": { 00:11:33.734 "read": true, 00:11:33.734 "write": true, 00:11:33.734 "unmap": true, 00:11:33.734 "flush": true, 00:11:33.734 "reset": true, 00:11:33.734 "nvme_admin": false, 00:11:33.734 "nvme_io": false, 00:11:33.734 "nvme_io_md": false, 00:11:33.734 "write_zeroes": true, 00:11:33.734 "zcopy": true, 00:11:33.734 "get_zone_info": false, 00:11:33.734 "zone_management": false, 00:11:33.734 "zone_append": false, 00:11:33.734 "compare": false, 00:11:33.734 "compare_and_write": false, 00:11:33.734 "abort": true, 00:11:33.734 "seek_hole": false, 00:11:33.734 "seek_data": false, 00:11:33.734 "copy": true, 00:11:33.734 "nvme_iov_md": false 00:11:33.734 }, 00:11:33.734 "memory_domains": [ 00:11:33.734 { 00:11:33.734 "dma_device_id": "system", 00:11:33.734 "dma_device_type": 1 00:11:33.734 }, 00:11:33.734 { 00:11:33.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.734 "dma_device_type": 2 00:11:33.734 } 00:11:33.734 ], 00:11:33.734 "driver_specific": {} 00:11:33.734 } 00:11:33.734 ] 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:33.734 09:48:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.734 BaseBdev3 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.734 09:48:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.734 [ 00:11:33.734 { 00:11:33.734 "name": "BaseBdev3", 00:11:33.734 "aliases": [ 00:11:33.734 "8c1ac971-3444-4612-828d-62d9126392fc" 00:11:33.734 ], 00:11:33.734 "product_name": "Malloc disk", 00:11:33.734 "block_size": 512, 00:11:33.734 "num_blocks": 65536, 00:11:33.734 "uuid": "8c1ac971-3444-4612-828d-62d9126392fc", 00:11:33.734 "assigned_rate_limits": { 00:11:33.734 "rw_ios_per_sec": 0, 00:11:33.734 "rw_mbytes_per_sec": 0, 00:11:33.734 "r_mbytes_per_sec": 0, 00:11:33.734 "w_mbytes_per_sec": 0 00:11:33.734 }, 00:11:33.734 "claimed": false, 00:11:33.734 "zoned": false, 00:11:33.734 "supported_io_types": { 00:11:33.734 "read": true, 00:11:33.734 "write": true, 00:11:33.734 "unmap": true, 00:11:33.734 "flush": true, 00:11:33.734 "reset": true, 00:11:33.734 "nvme_admin": false, 00:11:33.734 "nvme_io": false, 00:11:33.734 "nvme_io_md": false, 00:11:33.734 "write_zeroes": true, 00:11:33.734 "zcopy": true, 00:11:33.734 "get_zone_info": false, 00:11:33.734 "zone_management": false, 00:11:33.734 "zone_append": false, 00:11:33.734 "compare": false, 00:11:33.734 "compare_and_write": false, 00:11:33.734 "abort": true, 00:11:33.734 "seek_hole": false, 00:11:33.734 "seek_data": false, 00:11:33.734 "copy": true, 00:11:33.734 "nvme_iov_md": false 00:11:33.734 }, 00:11:33.734 "memory_domains": [ 00:11:33.734 { 00:11:33.734 "dma_device_id": "system", 00:11:33.734 "dma_device_type": 1 00:11:33.734 }, 00:11:33.734 { 00:11:33.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.734 "dma_device_type": 2 00:11:33.734 } 00:11:33.734 ], 00:11:33.734 "driver_specific": {} 00:11:33.734 } 00:11:33.734 ] 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.734 BaseBdev4 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.734 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.734 [ 00:11:33.734 { 00:11:33.734 "name": "BaseBdev4", 00:11:33.734 "aliases": [ 00:11:33.734 "5ac3fce0-c344-46eb-ad9f-7ddb86e2b1ec" 00:11:33.734 ], 00:11:33.734 "product_name": "Malloc disk", 00:11:33.734 "block_size": 512, 00:11:33.734 "num_blocks": 65536, 00:11:33.734 "uuid": "5ac3fce0-c344-46eb-ad9f-7ddb86e2b1ec", 00:11:33.734 "assigned_rate_limits": { 00:11:33.734 "rw_ios_per_sec": 0, 00:11:33.734 "rw_mbytes_per_sec": 0, 00:11:33.734 "r_mbytes_per_sec": 0, 00:11:33.734 "w_mbytes_per_sec": 0 00:11:33.734 }, 00:11:33.734 "claimed": false, 00:11:33.734 "zoned": false, 00:11:33.734 "supported_io_types": { 00:11:33.734 "read": true, 00:11:33.734 "write": true, 00:11:33.734 "unmap": true, 00:11:33.734 "flush": true, 00:11:33.734 "reset": true, 00:11:33.734 "nvme_admin": false, 00:11:33.734 "nvme_io": false, 00:11:33.734 "nvme_io_md": false, 00:11:33.734 "write_zeroes": true, 00:11:33.734 "zcopy": true, 00:11:33.734 "get_zone_info": false, 00:11:33.734 "zone_management": false, 00:11:33.734 "zone_append": false, 00:11:33.734 "compare": false, 00:11:33.734 "compare_and_write": false, 00:11:33.734 "abort": true, 00:11:33.734 "seek_hole": false, 00:11:33.734 "seek_data": false, 00:11:33.734 "copy": true, 00:11:33.734 "nvme_iov_md": false 00:11:33.734 }, 00:11:33.735 "memory_domains": [ 00:11:33.735 { 00:11:33.735 "dma_device_id": "system", 00:11:33.735 "dma_device_type": 1 00:11:33.735 }, 00:11:33.735 { 00:11:33.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.735 "dma_device_type": 2 00:11:33.735 } 00:11:33.735 ], 00:11:33.735 "driver_specific": {} 00:11:33.735 } 00:11:33.735 ] 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.735 [2024-12-06 09:48:58.927544] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.735 [2024-12-06 09:48:58.927594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.735 [2024-12-06 09:48:58.927614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.735 [2024-12-06 09:48:58.929348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.735 [2024-12-06 09:48:58.929399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.735 "name": "Existed_Raid", 00:11:33.735 "uuid": "a59b1603-f462-4988-9bf0-fa42a2eadfd3", 00:11:33.735 "strip_size_kb": 0, 00:11:33.735 "state": "configuring", 00:11:33.735 "raid_level": "raid1", 00:11:33.735 "superblock": true, 00:11:33.735 "num_base_bdevs": 4, 00:11:33.735 "num_base_bdevs_discovered": 3, 00:11:33.735 "num_base_bdevs_operational": 4, 00:11:33.735 "base_bdevs_list": [ 00:11:33.735 { 00:11:33.735 "name": "BaseBdev1", 00:11:33.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.735 "is_configured": false, 00:11:33.735 "data_offset": 0, 00:11:33.735 "data_size": 0 00:11:33.735 }, 00:11:33.735 { 00:11:33.735 "name": "BaseBdev2", 00:11:33.735 "uuid": "4d46868e-8549-41f8-9009-a3acf2ef620a", 
00:11:33.735 "is_configured": true, 00:11:33.735 "data_offset": 2048, 00:11:33.735 "data_size": 63488 00:11:33.735 }, 00:11:33.735 { 00:11:33.735 "name": "BaseBdev3", 00:11:33.735 "uuid": "8c1ac971-3444-4612-828d-62d9126392fc", 00:11:33.735 "is_configured": true, 00:11:33.735 "data_offset": 2048, 00:11:33.735 "data_size": 63488 00:11:33.735 }, 00:11:33.735 { 00:11:33.735 "name": "BaseBdev4", 00:11:33.735 "uuid": "5ac3fce0-c344-46eb-ad9f-7ddb86e2b1ec", 00:11:33.735 "is_configured": true, 00:11:33.735 "data_offset": 2048, 00:11:33.735 "data_size": 63488 00:11:33.735 } 00:11:33.735 ] 00:11:33.735 }' 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.735 09:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.305 [2024-12-06 09:48:59.366824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.305 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.305 "name": "Existed_Raid", 00:11:34.305 "uuid": "a59b1603-f462-4988-9bf0-fa42a2eadfd3", 00:11:34.306 "strip_size_kb": 0, 00:11:34.306 "state": "configuring", 00:11:34.306 "raid_level": "raid1", 00:11:34.306 "superblock": true, 00:11:34.306 "num_base_bdevs": 4, 00:11:34.306 "num_base_bdevs_discovered": 2, 00:11:34.306 "num_base_bdevs_operational": 4, 00:11:34.306 "base_bdevs_list": [ 00:11:34.306 { 00:11:34.306 "name": "BaseBdev1", 00:11:34.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.306 "is_configured": false, 00:11:34.306 "data_offset": 0, 00:11:34.306 "data_size": 0 00:11:34.306 }, 00:11:34.306 { 00:11:34.306 "name": null, 00:11:34.306 "uuid": "4d46868e-8549-41f8-9009-a3acf2ef620a", 00:11:34.306 
"is_configured": false, 00:11:34.306 "data_offset": 0, 00:11:34.306 "data_size": 63488 00:11:34.306 }, 00:11:34.306 { 00:11:34.306 "name": "BaseBdev3", 00:11:34.306 "uuid": "8c1ac971-3444-4612-828d-62d9126392fc", 00:11:34.306 "is_configured": true, 00:11:34.306 "data_offset": 2048, 00:11:34.306 "data_size": 63488 00:11:34.306 }, 00:11:34.306 { 00:11:34.306 "name": "BaseBdev4", 00:11:34.306 "uuid": "5ac3fce0-c344-46eb-ad9f-7ddb86e2b1ec", 00:11:34.306 "is_configured": true, 00:11:34.306 "data_offset": 2048, 00:11:34.306 "data_size": 63488 00:11:34.306 } 00:11:34.306 ] 00:11:34.306 }' 00:11:34.306 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.306 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.566 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.566 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:34.566 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.566 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.826 [2024-12-06 09:48:59.908276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.826 BaseBdev1 
00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.826 [ 00:11:34.826 { 00:11:34.826 "name": "BaseBdev1", 00:11:34.826 "aliases": [ 00:11:34.826 "3a1e941e-3ad7-4aff-a5d3-f82e59c2e7f5" 00:11:34.826 ], 00:11:34.826 "product_name": "Malloc disk", 00:11:34.826 "block_size": 512, 00:11:34.826 "num_blocks": 65536, 00:11:34.826 "uuid": "3a1e941e-3ad7-4aff-a5d3-f82e59c2e7f5", 00:11:34.826 "assigned_rate_limits": { 00:11:34.826 
"rw_ios_per_sec": 0, 00:11:34.826 "rw_mbytes_per_sec": 0, 00:11:34.826 "r_mbytes_per_sec": 0, 00:11:34.826 "w_mbytes_per_sec": 0 00:11:34.826 }, 00:11:34.826 "claimed": true, 00:11:34.826 "claim_type": "exclusive_write", 00:11:34.826 "zoned": false, 00:11:34.826 "supported_io_types": { 00:11:34.826 "read": true, 00:11:34.826 "write": true, 00:11:34.826 "unmap": true, 00:11:34.826 "flush": true, 00:11:34.826 "reset": true, 00:11:34.826 "nvme_admin": false, 00:11:34.826 "nvme_io": false, 00:11:34.826 "nvme_io_md": false, 00:11:34.826 "write_zeroes": true, 00:11:34.826 "zcopy": true, 00:11:34.826 "get_zone_info": false, 00:11:34.826 "zone_management": false, 00:11:34.826 "zone_append": false, 00:11:34.826 "compare": false, 00:11:34.826 "compare_and_write": false, 00:11:34.826 "abort": true, 00:11:34.826 "seek_hole": false, 00:11:34.826 "seek_data": false, 00:11:34.826 "copy": true, 00:11:34.826 "nvme_iov_md": false 00:11:34.826 }, 00:11:34.826 "memory_domains": [ 00:11:34.826 { 00:11:34.826 "dma_device_id": "system", 00:11:34.826 "dma_device_type": 1 00:11:34.826 }, 00:11:34.826 { 00:11:34.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.826 "dma_device_type": 2 00:11:34.826 } 00:11:34.826 ], 00:11:34.826 "driver_specific": {} 00:11:34.826 } 00:11:34.826 ] 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.826 "name": "Existed_Raid", 00:11:34.826 "uuid": "a59b1603-f462-4988-9bf0-fa42a2eadfd3", 00:11:34.826 "strip_size_kb": 0, 00:11:34.826 "state": "configuring", 00:11:34.826 "raid_level": "raid1", 00:11:34.826 "superblock": true, 00:11:34.826 "num_base_bdevs": 4, 00:11:34.826 "num_base_bdevs_discovered": 3, 00:11:34.826 "num_base_bdevs_operational": 4, 00:11:34.826 "base_bdevs_list": [ 00:11:34.826 { 00:11:34.826 "name": "BaseBdev1", 00:11:34.826 "uuid": "3a1e941e-3ad7-4aff-a5d3-f82e59c2e7f5", 00:11:34.826 "is_configured": true, 00:11:34.826 "data_offset": 2048, 00:11:34.826 "data_size": 63488 
00:11:34.826 }, 00:11:34.826 { 00:11:34.826 "name": null, 00:11:34.826 "uuid": "4d46868e-8549-41f8-9009-a3acf2ef620a", 00:11:34.826 "is_configured": false, 00:11:34.826 "data_offset": 0, 00:11:34.826 "data_size": 63488 00:11:34.826 }, 00:11:34.826 { 00:11:34.826 "name": "BaseBdev3", 00:11:34.826 "uuid": "8c1ac971-3444-4612-828d-62d9126392fc", 00:11:34.826 "is_configured": true, 00:11:34.826 "data_offset": 2048, 00:11:34.826 "data_size": 63488 00:11:34.826 }, 00:11:34.826 { 00:11:34.826 "name": "BaseBdev4", 00:11:34.826 "uuid": "5ac3fce0-c344-46eb-ad9f-7ddb86e2b1ec", 00:11:34.826 "is_configured": true, 00:11:34.826 "data_offset": 2048, 00:11:34.826 "data_size": 63488 00:11:34.826 } 00:11:34.826 ] 00:11:34.826 }' 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.826 09:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.395 
[2024-12-06 09:49:00.415516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.395 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.395 09:49:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.395 "name": "Existed_Raid", 00:11:35.395 "uuid": "a59b1603-f462-4988-9bf0-fa42a2eadfd3", 00:11:35.395 "strip_size_kb": 0, 00:11:35.395 "state": "configuring", 00:11:35.395 "raid_level": "raid1", 00:11:35.395 "superblock": true, 00:11:35.395 "num_base_bdevs": 4, 00:11:35.395 "num_base_bdevs_discovered": 2, 00:11:35.395 "num_base_bdevs_operational": 4, 00:11:35.395 "base_bdevs_list": [ 00:11:35.395 { 00:11:35.395 "name": "BaseBdev1", 00:11:35.395 "uuid": "3a1e941e-3ad7-4aff-a5d3-f82e59c2e7f5", 00:11:35.395 "is_configured": true, 00:11:35.395 "data_offset": 2048, 00:11:35.395 "data_size": 63488 00:11:35.396 }, 00:11:35.396 { 00:11:35.396 "name": null, 00:11:35.396 "uuid": "4d46868e-8549-41f8-9009-a3acf2ef620a", 00:11:35.396 "is_configured": false, 00:11:35.396 "data_offset": 0, 00:11:35.396 "data_size": 63488 00:11:35.396 }, 00:11:35.396 { 00:11:35.396 "name": null, 00:11:35.396 "uuid": "8c1ac971-3444-4612-828d-62d9126392fc", 00:11:35.396 "is_configured": false, 00:11:35.396 "data_offset": 0, 00:11:35.396 "data_size": 63488 00:11:35.396 }, 00:11:35.396 { 00:11:35.396 "name": "BaseBdev4", 00:11:35.396 "uuid": "5ac3fce0-c344-46eb-ad9f-7ddb86e2b1ec", 00:11:35.396 "is_configured": true, 00:11:35.396 "data_offset": 2048, 00:11:35.396 "data_size": 63488 00:11:35.396 } 00:11:35.396 ] 00:11:35.396 }' 00:11:35.396 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.396 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.654 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.654 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.654 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:35.654 
09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.654 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.654 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:35.654 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.655 [2024-12-06 09:49:00.858756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.655 "name": "Existed_Raid", 00:11:35.655 "uuid": "a59b1603-f462-4988-9bf0-fa42a2eadfd3", 00:11:35.655 "strip_size_kb": 0, 00:11:35.655 "state": "configuring", 00:11:35.655 "raid_level": "raid1", 00:11:35.655 "superblock": true, 00:11:35.655 "num_base_bdevs": 4, 00:11:35.655 "num_base_bdevs_discovered": 3, 00:11:35.655 "num_base_bdevs_operational": 4, 00:11:35.655 "base_bdevs_list": [ 00:11:35.655 { 00:11:35.655 "name": "BaseBdev1", 00:11:35.655 "uuid": "3a1e941e-3ad7-4aff-a5d3-f82e59c2e7f5", 00:11:35.655 "is_configured": true, 00:11:35.655 "data_offset": 2048, 00:11:35.655 "data_size": 63488 00:11:35.655 }, 00:11:35.655 { 00:11:35.655 "name": null, 00:11:35.655 "uuid": "4d46868e-8549-41f8-9009-a3acf2ef620a", 00:11:35.655 "is_configured": false, 00:11:35.655 "data_offset": 0, 00:11:35.655 "data_size": 63488 00:11:35.655 }, 00:11:35.655 { 00:11:35.655 "name": "BaseBdev3", 00:11:35.655 "uuid": "8c1ac971-3444-4612-828d-62d9126392fc", 00:11:35.655 "is_configured": true, 00:11:35.655 "data_offset": 2048, 00:11:35.655 "data_size": 63488 00:11:35.655 }, 00:11:35.655 { 00:11:35.655 "name": "BaseBdev4", 00:11:35.655 "uuid": 
"5ac3fce0-c344-46eb-ad9f-7ddb86e2b1ec", 00:11:35.655 "is_configured": true, 00:11:35.655 "data_offset": 2048, 00:11:35.655 "data_size": 63488 00:11:35.655 } 00:11:35.655 ] 00:11:35.655 }' 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.655 09:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.223 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.224 [2024-12-06 09:49:01.294090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.224 "name": "Existed_Raid", 00:11:36.224 "uuid": "a59b1603-f462-4988-9bf0-fa42a2eadfd3", 00:11:36.224 "strip_size_kb": 0, 00:11:36.224 "state": "configuring", 00:11:36.224 "raid_level": "raid1", 00:11:36.224 "superblock": true, 00:11:36.224 "num_base_bdevs": 4, 00:11:36.224 "num_base_bdevs_discovered": 2, 00:11:36.224 "num_base_bdevs_operational": 4, 00:11:36.224 "base_bdevs_list": [ 00:11:36.224 { 00:11:36.224 "name": null, 00:11:36.224 
"uuid": "3a1e941e-3ad7-4aff-a5d3-f82e59c2e7f5", 00:11:36.224 "is_configured": false, 00:11:36.224 "data_offset": 0, 00:11:36.224 "data_size": 63488 00:11:36.224 }, 00:11:36.224 { 00:11:36.224 "name": null, 00:11:36.224 "uuid": "4d46868e-8549-41f8-9009-a3acf2ef620a", 00:11:36.224 "is_configured": false, 00:11:36.224 "data_offset": 0, 00:11:36.224 "data_size": 63488 00:11:36.224 }, 00:11:36.224 { 00:11:36.224 "name": "BaseBdev3", 00:11:36.224 "uuid": "8c1ac971-3444-4612-828d-62d9126392fc", 00:11:36.224 "is_configured": true, 00:11:36.224 "data_offset": 2048, 00:11:36.224 "data_size": 63488 00:11:36.224 }, 00:11:36.224 { 00:11:36.224 "name": "BaseBdev4", 00:11:36.224 "uuid": "5ac3fce0-c344-46eb-ad9f-7ddb86e2b1ec", 00:11:36.224 "is_configured": true, 00:11:36.224 "data_offset": 2048, 00:11:36.224 "data_size": 63488 00:11:36.224 } 00:11:36.224 ] 00:11:36.224 }' 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.224 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.794 [2024-12-06 09:49:01.861812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.794 09:49:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.794 "name": "Existed_Raid", 00:11:36.794 "uuid": "a59b1603-f462-4988-9bf0-fa42a2eadfd3", 00:11:36.794 "strip_size_kb": 0, 00:11:36.794 "state": "configuring", 00:11:36.794 "raid_level": "raid1", 00:11:36.794 "superblock": true, 00:11:36.794 "num_base_bdevs": 4, 00:11:36.794 "num_base_bdevs_discovered": 3, 00:11:36.794 "num_base_bdevs_operational": 4, 00:11:36.794 "base_bdevs_list": [ 00:11:36.794 { 00:11:36.794 "name": null, 00:11:36.794 "uuid": "3a1e941e-3ad7-4aff-a5d3-f82e59c2e7f5", 00:11:36.794 "is_configured": false, 00:11:36.794 "data_offset": 0, 00:11:36.794 "data_size": 63488 00:11:36.794 }, 00:11:36.794 { 00:11:36.794 "name": "BaseBdev2", 00:11:36.794 "uuid": "4d46868e-8549-41f8-9009-a3acf2ef620a", 00:11:36.794 "is_configured": true, 00:11:36.794 "data_offset": 2048, 00:11:36.794 "data_size": 63488 00:11:36.794 }, 00:11:36.794 { 00:11:36.794 "name": "BaseBdev3", 00:11:36.794 "uuid": "8c1ac971-3444-4612-828d-62d9126392fc", 00:11:36.794 "is_configured": true, 00:11:36.794 "data_offset": 2048, 00:11:36.794 "data_size": 63488 00:11:36.794 }, 00:11:36.794 { 00:11:36.794 "name": "BaseBdev4", 00:11:36.794 "uuid": "5ac3fce0-c344-46eb-ad9f-7ddb86e2b1ec", 00:11:36.794 "is_configured": true, 00:11:36.794 "data_offset": 2048, 00:11:36.794 "data_size": 63488 00:11:36.794 } 00:11:36.794 ] 00:11:36.794 }' 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.794 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:37.364 09:49:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3a1e941e-3ad7-4aff-a5d3-f82e59c2e7f5 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.364 [2024-12-06 09:49:02.441027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:37.364 [2024-12-06 09:49:02.441314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:37.364 [2024-12-06 09:49:02.441332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:37.364 [2024-12-06 09:49:02.441599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:37.364 [2024-12-06 09:49:02.441767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:37.364 [2024-12-06 09:49:02.441778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:37.364 [2024-12-06 09:49:02.441923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.364 NewBaseBdev 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.364 09:49:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.364 [ 00:11:37.364 { 00:11:37.364 "name": "NewBaseBdev", 00:11:37.364 "aliases": [ 00:11:37.364 "3a1e941e-3ad7-4aff-a5d3-f82e59c2e7f5" 00:11:37.364 ], 00:11:37.364 "product_name": "Malloc disk", 00:11:37.364 "block_size": 512, 00:11:37.364 "num_blocks": 65536, 00:11:37.364 "uuid": "3a1e941e-3ad7-4aff-a5d3-f82e59c2e7f5", 00:11:37.364 "assigned_rate_limits": { 00:11:37.364 "rw_ios_per_sec": 0, 00:11:37.364 "rw_mbytes_per_sec": 0, 00:11:37.364 "r_mbytes_per_sec": 0, 00:11:37.364 "w_mbytes_per_sec": 0 00:11:37.364 }, 00:11:37.364 "claimed": true, 00:11:37.364 "claim_type": "exclusive_write", 00:11:37.364 "zoned": false, 00:11:37.364 "supported_io_types": { 00:11:37.364 "read": true, 00:11:37.364 "write": true, 00:11:37.364 "unmap": true, 00:11:37.364 "flush": true, 00:11:37.364 "reset": true, 00:11:37.364 "nvme_admin": false, 00:11:37.364 "nvme_io": false, 00:11:37.364 "nvme_io_md": false, 00:11:37.364 "write_zeroes": true, 00:11:37.364 "zcopy": true, 00:11:37.364 "get_zone_info": false, 00:11:37.364 "zone_management": false, 00:11:37.364 "zone_append": false, 00:11:37.364 "compare": false, 00:11:37.364 "compare_and_write": false, 00:11:37.364 "abort": true, 00:11:37.364 "seek_hole": false, 00:11:37.364 "seek_data": false, 00:11:37.364 "copy": true, 00:11:37.364 "nvme_iov_md": false 00:11:37.364 }, 00:11:37.364 "memory_domains": [ 00:11:37.364 { 00:11:37.364 "dma_device_id": "system", 00:11:37.364 "dma_device_type": 1 00:11:37.364 }, 00:11:37.364 { 00:11:37.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.364 "dma_device_type": 2 00:11:37.364 } 00:11:37.364 ], 00:11:37.364 "driver_specific": {} 00:11:37.364 } 00:11:37.364 ] 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:37.364 09:49:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.364 "name": "Existed_Raid", 00:11:37.364 "uuid": "a59b1603-f462-4988-9bf0-fa42a2eadfd3", 00:11:37.364 "strip_size_kb": 0, 00:11:37.364 
"state": "online", 00:11:37.364 "raid_level": "raid1", 00:11:37.364 "superblock": true, 00:11:37.364 "num_base_bdevs": 4, 00:11:37.364 "num_base_bdevs_discovered": 4, 00:11:37.364 "num_base_bdevs_operational": 4, 00:11:37.364 "base_bdevs_list": [ 00:11:37.364 { 00:11:37.364 "name": "NewBaseBdev", 00:11:37.364 "uuid": "3a1e941e-3ad7-4aff-a5d3-f82e59c2e7f5", 00:11:37.364 "is_configured": true, 00:11:37.364 "data_offset": 2048, 00:11:37.364 "data_size": 63488 00:11:37.364 }, 00:11:37.364 { 00:11:37.364 "name": "BaseBdev2", 00:11:37.364 "uuid": "4d46868e-8549-41f8-9009-a3acf2ef620a", 00:11:37.364 "is_configured": true, 00:11:37.364 "data_offset": 2048, 00:11:37.364 "data_size": 63488 00:11:37.364 }, 00:11:37.364 { 00:11:37.364 "name": "BaseBdev3", 00:11:37.364 "uuid": "8c1ac971-3444-4612-828d-62d9126392fc", 00:11:37.364 "is_configured": true, 00:11:37.364 "data_offset": 2048, 00:11:37.364 "data_size": 63488 00:11:37.364 }, 00:11:37.364 { 00:11:37.364 "name": "BaseBdev4", 00:11:37.364 "uuid": "5ac3fce0-c344-46eb-ad9f-7ddb86e2b1ec", 00:11:37.364 "is_configured": true, 00:11:37.364 "data_offset": 2048, 00:11:37.364 "data_size": 63488 00:11:37.364 } 00:11:37.364 ] 00:11:37.364 }' 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.364 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.624 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:37.624 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:37.624 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:37.624 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:37.624 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:37.624 
09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:37.624 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:37.624 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:37.624 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.624 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.624 [2024-12-06 09:49:02.892627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.885 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.885 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:37.885 "name": "Existed_Raid", 00:11:37.885 "aliases": [ 00:11:37.885 "a59b1603-f462-4988-9bf0-fa42a2eadfd3" 00:11:37.885 ], 00:11:37.885 "product_name": "Raid Volume", 00:11:37.885 "block_size": 512, 00:11:37.885 "num_blocks": 63488, 00:11:37.885 "uuid": "a59b1603-f462-4988-9bf0-fa42a2eadfd3", 00:11:37.885 "assigned_rate_limits": { 00:11:37.885 "rw_ios_per_sec": 0, 00:11:37.885 "rw_mbytes_per_sec": 0, 00:11:37.885 "r_mbytes_per_sec": 0, 00:11:37.885 "w_mbytes_per_sec": 0 00:11:37.885 }, 00:11:37.885 "claimed": false, 00:11:37.885 "zoned": false, 00:11:37.885 "supported_io_types": { 00:11:37.885 "read": true, 00:11:37.885 "write": true, 00:11:37.885 "unmap": false, 00:11:37.885 "flush": false, 00:11:37.885 "reset": true, 00:11:37.885 "nvme_admin": false, 00:11:37.885 "nvme_io": false, 00:11:37.885 "nvme_io_md": false, 00:11:37.885 "write_zeroes": true, 00:11:37.885 "zcopy": false, 00:11:37.885 "get_zone_info": false, 00:11:37.885 "zone_management": false, 00:11:37.885 "zone_append": false, 00:11:37.885 "compare": false, 00:11:37.885 "compare_and_write": false, 00:11:37.885 
"abort": false, 00:11:37.885 "seek_hole": false, 00:11:37.885 "seek_data": false, 00:11:37.885 "copy": false, 00:11:37.885 "nvme_iov_md": false 00:11:37.885 }, 00:11:37.885 "memory_domains": [ 00:11:37.885 { 00:11:37.885 "dma_device_id": "system", 00:11:37.885 "dma_device_type": 1 00:11:37.885 }, 00:11:37.885 { 00:11:37.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.885 "dma_device_type": 2 00:11:37.885 }, 00:11:37.885 { 00:11:37.885 "dma_device_id": "system", 00:11:37.885 "dma_device_type": 1 00:11:37.885 }, 00:11:37.885 { 00:11:37.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.885 "dma_device_type": 2 00:11:37.885 }, 00:11:37.885 { 00:11:37.885 "dma_device_id": "system", 00:11:37.885 "dma_device_type": 1 00:11:37.885 }, 00:11:37.885 { 00:11:37.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.885 "dma_device_type": 2 00:11:37.885 }, 00:11:37.885 { 00:11:37.885 "dma_device_id": "system", 00:11:37.885 "dma_device_type": 1 00:11:37.885 }, 00:11:37.885 { 00:11:37.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.885 "dma_device_type": 2 00:11:37.885 } 00:11:37.885 ], 00:11:37.885 "driver_specific": { 00:11:37.885 "raid": { 00:11:37.885 "uuid": "a59b1603-f462-4988-9bf0-fa42a2eadfd3", 00:11:37.885 "strip_size_kb": 0, 00:11:37.885 "state": "online", 00:11:37.885 "raid_level": "raid1", 00:11:37.885 "superblock": true, 00:11:37.885 "num_base_bdevs": 4, 00:11:37.885 "num_base_bdevs_discovered": 4, 00:11:37.885 "num_base_bdevs_operational": 4, 00:11:37.885 "base_bdevs_list": [ 00:11:37.885 { 00:11:37.885 "name": "NewBaseBdev", 00:11:37.885 "uuid": "3a1e941e-3ad7-4aff-a5d3-f82e59c2e7f5", 00:11:37.885 "is_configured": true, 00:11:37.885 "data_offset": 2048, 00:11:37.885 "data_size": 63488 00:11:37.885 }, 00:11:37.885 { 00:11:37.886 "name": "BaseBdev2", 00:11:37.886 "uuid": "4d46868e-8549-41f8-9009-a3acf2ef620a", 00:11:37.886 "is_configured": true, 00:11:37.886 "data_offset": 2048, 00:11:37.886 "data_size": 63488 00:11:37.886 }, 00:11:37.886 { 
00:11:37.886 "name": "BaseBdev3", 00:11:37.886 "uuid": "8c1ac971-3444-4612-828d-62d9126392fc", 00:11:37.886 "is_configured": true, 00:11:37.886 "data_offset": 2048, 00:11:37.886 "data_size": 63488 00:11:37.886 }, 00:11:37.886 { 00:11:37.886 "name": "BaseBdev4", 00:11:37.886 "uuid": "5ac3fce0-c344-46eb-ad9f-7ddb86e2b1ec", 00:11:37.886 "is_configured": true, 00:11:37.886 "data_offset": 2048, 00:11:37.886 "data_size": 63488 00:11:37.886 } 00:11:37.886 ] 00:11:37.886 } 00:11:37.886 } 00:11:37.886 }' 00:11:37.886 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:37.886 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:37.886 BaseBdev2 00:11:37.886 BaseBdev3 00:11:37.886 BaseBdev4' 00:11:37.886 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.886 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:37.886 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.886 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.886 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:37.886 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.886 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:37.886 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.146 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.146 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.146 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:38.146 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.146 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.146 [2024-12-06 09:49:03.171853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:38.146 [2024-12-06 09:49:03.171883] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.146 [2024-12-06 09:49:03.171991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.147 [2024-12-06 09:49:03.172298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.147 [2024-12-06 09:49:03.172319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:38.147 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.147 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73779 00:11:38.147 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73779 ']' 00:11:38.147 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73779 00:11:38.147 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:38.147 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.147 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73779 00:11:38.147 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.147 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.147 killing process with pid 73779 00:11:38.147 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73779' 00:11:38.147 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73779 00:11:38.147 [2024-12-06 09:49:03.216042] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.147 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73779 00:11:38.406 [2024-12-06 09:49:03.597936] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.787 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:39.787 00:11:39.787 real 0m11.304s 00:11:39.787 user 0m17.951s 00:11:39.787 sys 0m2.002s 00:11:39.787 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:39.787 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.787 ************************************ 00:11:39.787 END TEST raid_state_function_test_sb 00:11:39.787 ************************************ 00:11:39.788 09:49:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:39.788 09:49:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:39.788 09:49:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.788 09:49:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.788 ************************************ 00:11:39.788 START TEST raid_superblock_test 00:11:39.788 ************************************ 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:39.788 09:49:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74444 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74444 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74444 ']' 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.788 09:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.788 [2024-12-06 09:49:04.884548] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:11:39.788 [2024-12-06 09:49:04.885207] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74444 ] 00:11:39.788 [2024-12-06 09:49:05.058808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.047 [2024-12-06 09:49:05.169715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.304 [2024-12-06 09:49:05.365443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.304 [2024-12-06 09:49:05.365529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:40.564 
09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.564 malloc1 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.564 [2024-12-06 09:49:05.747099] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:40.564 [2024-12-06 09:49:05.747170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.564 [2024-12-06 09:49:05.747194] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:40.564 [2024-12-06 09:49:05.747203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.564 [2024-12-06 09:49:05.749193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.564 [2024-12-06 09:49:05.749301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:40.564 pt1 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.564 malloc2 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.564 [2024-12-06 09:49:05.803424] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:40.564 [2024-12-06 09:49:05.803525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.564 [2024-12-06 09:49:05.803569] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:40.564 [2024-12-06 09:49:05.803599] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.564 [2024-12-06 09:49:05.805709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.564 [2024-12-06 09:49:05.805781] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:40.564 
pt2 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.564 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.824 malloc3 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.824 [2024-12-06 09:49:05.872927] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:40.824 [2024-12-06 09:49:05.873048] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.824 [2024-12-06 09:49:05.873087] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:40.824 [2024-12-06 09:49:05.873114] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.824 [2024-12-06 09:49:05.875086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.824 [2024-12-06 09:49:05.875160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:40.824 pt3 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.824 malloc4 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.824 [2024-12-06 09:49:05.933621] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:40.824 [2024-12-06 09:49:05.933754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.824 [2024-12-06 09:49:05.933783] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:40.824 [2024-12-06 09:49:05.933794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.824 [2024-12-06 09:49:05.936119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.824 [2024-12-06 09:49:05.936160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:40.824 pt4 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.824 [2024-12-06 09:49:05.945596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:40.824 [2024-12-06 09:49:05.947344] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:40.824 [2024-12-06 09:49:05.947407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:40.824 [2024-12-06 09:49:05.947469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:40.824 [2024-12-06 09:49:05.947657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:40.824 [2024-12-06 09:49:05.947672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:40.824 [2024-12-06 09:49:05.947926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:40.824 [2024-12-06 09:49:05.948103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:40.824 [2024-12-06 09:49:05.948118] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:40.824 [2024-12-06 09:49:05.948271] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.824 
09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.824 09:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.824 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.824 "name": "raid_bdev1", 00:11:40.824 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:40.824 "strip_size_kb": 0, 00:11:40.824 "state": "online", 00:11:40.824 "raid_level": "raid1", 00:11:40.824 "superblock": true, 00:11:40.824 "num_base_bdevs": 4, 00:11:40.824 "num_base_bdevs_discovered": 4, 00:11:40.824 "num_base_bdevs_operational": 4, 00:11:40.825 "base_bdevs_list": [ 00:11:40.825 { 00:11:40.825 "name": "pt1", 00:11:40.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.825 "is_configured": true, 00:11:40.825 "data_offset": 2048, 00:11:40.825 "data_size": 63488 00:11:40.825 }, 00:11:40.825 { 00:11:40.825 "name": "pt2", 00:11:40.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.825 "is_configured": true, 00:11:40.825 "data_offset": 2048, 00:11:40.825 "data_size": 63488 00:11:40.825 }, 00:11:40.825 { 00:11:40.825 "name": "pt3", 00:11:40.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.825 "is_configured": true, 00:11:40.825 "data_offset": 2048, 00:11:40.825 "data_size": 63488 
00:11:40.825 }, 00:11:40.825 { 00:11:40.825 "name": "pt4", 00:11:40.825 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:40.825 "is_configured": true, 00:11:40.825 "data_offset": 2048, 00:11:40.825 "data_size": 63488 00:11:40.825 } 00:11:40.825 ] 00:11:40.825 }' 00:11:40.825 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.825 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.393 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:41.393 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:41.393 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.393 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.393 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.393 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.393 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:41.393 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.393 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.393 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.393 [2024-12-06 09:49:06.401140] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.393 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.393 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.393 "name": "raid_bdev1", 00:11:41.393 "aliases": [ 00:11:41.393 "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c" 00:11:41.393 ], 
00:11:41.393 "product_name": "Raid Volume", 00:11:41.393 "block_size": 512, 00:11:41.393 "num_blocks": 63488, 00:11:41.393 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:41.393 "assigned_rate_limits": { 00:11:41.393 "rw_ios_per_sec": 0, 00:11:41.393 "rw_mbytes_per_sec": 0, 00:11:41.393 "r_mbytes_per_sec": 0, 00:11:41.393 "w_mbytes_per_sec": 0 00:11:41.393 }, 00:11:41.393 "claimed": false, 00:11:41.393 "zoned": false, 00:11:41.393 "supported_io_types": { 00:11:41.393 "read": true, 00:11:41.393 "write": true, 00:11:41.393 "unmap": false, 00:11:41.393 "flush": false, 00:11:41.393 "reset": true, 00:11:41.393 "nvme_admin": false, 00:11:41.393 "nvme_io": false, 00:11:41.393 "nvme_io_md": false, 00:11:41.393 "write_zeroes": true, 00:11:41.393 "zcopy": false, 00:11:41.393 "get_zone_info": false, 00:11:41.393 "zone_management": false, 00:11:41.393 "zone_append": false, 00:11:41.394 "compare": false, 00:11:41.394 "compare_and_write": false, 00:11:41.394 "abort": false, 00:11:41.394 "seek_hole": false, 00:11:41.394 "seek_data": false, 00:11:41.394 "copy": false, 00:11:41.394 "nvme_iov_md": false 00:11:41.394 }, 00:11:41.394 "memory_domains": [ 00:11:41.394 { 00:11:41.394 "dma_device_id": "system", 00:11:41.394 "dma_device_type": 1 00:11:41.394 }, 00:11:41.394 { 00:11:41.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.394 "dma_device_type": 2 00:11:41.394 }, 00:11:41.394 { 00:11:41.394 "dma_device_id": "system", 00:11:41.394 "dma_device_type": 1 00:11:41.394 }, 00:11:41.394 { 00:11:41.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.394 "dma_device_type": 2 00:11:41.394 }, 00:11:41.394 { 00:11:41.394 "dma_device_id": "system", 00:11:41.394 "dma_device_type": 1 00:11:41.394 }, 00:11:41.394 { 00:11:41.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.394 "dma_device_type": 2 00:11:41.394 }, 00:11:41.394 { 00:11:41.394 "dma_device_id": "system", 00:11:41.394 "dma_device_type": 1 00:11:41.394 }, 00:11:41.394 { 00:11:41.394 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:41.394 "dma_device_type": 2 00:11:41.394 } 00:11:41.394 ], 00:11:41.394 "driver_specific": { 00:11:41.394 "raid": { 00:11:41.394 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:41.394 "strip_size_kb": 0, 00:11:41.394 "state": "online", 00:11:41.394 "raid_level": "raid1", 00:11:41.394 "superblock": true, 00:11:41.394 "num_base_bdevs": 4, 00:11:41.394 "num_base_bdevs_discovered": 4, 00:11:41.394 "num_base_bdevs_operational": 4, 00:11:41.394 "base_bdevs_list": [ 00:11:41.394 { 00:11:41.394 "name": "pt1", 00:11:41.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:41.394 "is_configured": true, 00:11:41.394 "data_offset": 2048, 00:11:41.394 "data_size": 63488 00:11:41.394 }, 00:11:41.394 { 00:11:41.394 "name": "pt2", 00:11:41.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.394 "is_configured": true, 00:11:41.394 "data_offset": 2048, 00:11:41.394 "data_size": 63488 00:11:41.394 }, 00:11:41.394 { 00:11:41.394 "name": "pt3", 00:11:41.394 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.394 "is_configured": true, 00:11:41.394 "data_offset": 2048, 00:11:41.394 "data_size": 63488 00:11:41.394 }, 00:11:41.394 { 00:11:41.394 "name": "pt4", 00:11:41.394 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:41.394 "is_configured": true, 00:11:41.394 "data_offset": 2048, 00:11:41.394 "data_size": 63488 00:11:41.394 } 00:11:41.394 ] 00:11:41.394 } 00:11:41.394 } 00:11:41.394 }' 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:41.394 pt2 00:11:41.394 pt3 00:11:41.394 pt4' 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.394 09:49:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.394 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.655 [2024-12-06 09:49:06.712540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=de63b3d9-ca5a-4a9d-94ae-753b9fd7515c 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z de63b3d9-ca5a-4a9d-94ae-753b9fd7515c ']' 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.655 [2024-12-06 09:49:06.736209] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:41.655 [2024-12-06 09:49:06.736233] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.655 [2024-12-06 09:49:06.736306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.655 [2024-12-06 09:49:06.736391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.655 [2024-12-06 09:49:06.736405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.655 09:49:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.655 [2024-12-06 09:49:06.871989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:41.655 [2024-12-06 09:49:06.873890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:41.655 [2024-12-06 09:49:06.873982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:41.655 [2024-12-06 09:49:06.874038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:41.655 [2024-12-06 09:49:06.874120] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:41.655 [2024-12-06 09:49:06.874273] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:41.655 [2024-12-06 09:49:06.874335] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:41.655 [2024-12-06 09:49:06.874391] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:41.655 [2024-12-06 09:49:06.874442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:41.655 [2024-12-06 09:49:06.874474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:41.655 request: 00:11:41.655 { 00:11:41.655 "name": "raid_bdev1", 00:11:41.655 "raid_level": "raid1", 00:11:41.655 "base_bdevs": [ 00:11:41.655 "malloc1", 00:11:41.655 "malloc2", 00:11:41.655 "malloc3", 00:11:41.655 "malloc4" 00:11:41.655 ], 00:11:41.655 "superblock": false, 00:11:41.655 "method": "bdev_raid_create", 00:11:41.655 "req_id": 1 00:11:41.655 } 00:11:41.655 Got JSON-RPC error response 00:11:41.655 response: 00:11:41.655 { 00:11:41.655 "code": -17, 00:11:41.655 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:41.655 } 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:41.655 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:41.656 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.656 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.656 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.656 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:41.656 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:41.916 
09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.916 [2024-12-06 09:49:06.943882] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:41.916 [2024-12-06 09:49:06.943942] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.916 [2024-12-06 09:49:06.943962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:41.916 [2024-12-06 09:49:06.943972] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.916 [2024-12-06 09:49:06.946261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.916 [2024-12-06 09:49:06.946303] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:41.916 [2024-12-06 09:49:06.946387] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:41.916 [2024-12-06 09:49:06.946450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:41.916 pt1 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.916 09:49:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.916 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.916 "name": "raid_bdev1", 00:11:41.916 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:41.916 "strip_size_kb": 0, 00:11:41.916 "state": "configuring", 00:11:41.916 "raid_level": "raid1", 00:11:41.916 "superblock": true, 00:11:41.916 "num_base_bdevs": 4, 00:11:41.916 "num_base_bdevs_discovered": 1, 00:11:41.916 "num_base_bdevs_operational": 4, 00:11:41.916 "base_bdevs_list": [ 00:11:41.916 { 00:11:41.916 "name": "pt1", 00:11:41.917 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:41.917 "is_configured": true, 00:11:41.917 "data_offset": 2048, 00:11:41.917 "data_size": 63488 00:11:41.917 }, 00:11:41.917 { 00:11:41.917 "name": null, 00:11:41.917 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.917 "is_configured": false, 00:11:41.917 "data_offset": 2048, 00:11:41.917 "data_size": 63488 00:11:41.917 }, 00:11:41.917 { 00:11:41.917 "name": null, 00:11:41.917 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.917 
"is_configured": false, 00:11:41.917 "data_offset": 2048, 00:11:41.917 "data_size": 63488 00:11:41.917 }, 00:11:41.917 { 00:11:41.917 "name": null, 00:11:41.917 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:41.917 "is_configured": false, 00:11:41.917 "data_offset": 2048, 00:11:41.917 "data_size": 63488 00:11:41.917 } 00:11:41.917 ] 00:11:41.917 }' 00:11:41.917 09:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.917 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.175 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:42.175 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:42.175 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.175 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.175 [2024-12-06 09:49:07.363192] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:42.175 [2024-12-06 09:49:07.363258] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.175 [2024-12-06 09:49:07.363280] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:42.175 [2024-12-06 09:49:07.363291] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.175 [2024-12-06 09:49:07.363713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.175 [2024-12-06 09:49:07.363741] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:42.175 [2024-12-06 09:49:07.363833] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:42.175 [2024-12-06 09:49:07.363858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:42.175 pt2 00:11:42.175 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.175 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:42.175 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.175 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.175 [2024-12-06 09:49:07.375156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:42.175 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.176 "name": "raid_bdev1", 00:11:42.176 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:42.176 "strip_size_kb": 0, 00:11:42.176 "state": "configuring", 00:11:42.176 "raid_level": "raid1", 00:11:42.176 "superblock": true, 00:11:42.176 "num_base_bdevs": 4, 00:11:42.176 "num_base_bdevs_discovered": 1, 00:11:42.176 "num_base_bdevs_operational": 4, 00:11:42.176 "base_bdevs_list": [ 00:11:42.176 { 00:11:42.176 "name": "pt1", 00:11:42.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:42.176 "is_configured": true, 00:11:42.176 "data_offset": 2048, 00:11:42.176 "data_size": 63488 00:11:42.176 }, 00:11:42.176 { 00:11:42.176 "name": null, 00:11:42.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.176 "is_configured": false, 00:11:42.176 "data_offset": 0, 00:11:42.176 "data_size": 63488 00:11:42.176 }, 00:11:42.176 { 00:11:42.176 "name": null, 00:11:42.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.176 "is_configured": false, 00:11:42.176 "data_offset": 2048, 00:11:42.176 "data_size": 63488 00:11:42.176 }, 00:11:42.176 { 00:11:42.176 "name": null, 00:11:42.176 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:42.176 "is_configured": false, 00:11:42.176 "data_offset": 2048, 00:11:42.176 "data_size": 63488 00:11:42.176 } 00:11:42.176 ] 00:11:42.176 }' 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.176 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.744 [2024-12-06 09:49:07.798414] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:42.744 [2024-12-06 09:49:07.798477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.744 [2024-12-06 09:49:07.798498] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:42.744 [2024-12-06 09:49:07.798507] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.744 [2024-12-06 09:49:07.798970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.744 [2024-12-06 09:49:07.799000] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:42.744 [2024-12-06 09:49:07.799085] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:42.744 [2024-12-06 09:49:07.799109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:42.744 pt2 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:42.744 09:49:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.744 [2024-12-06 09:49:07.810368] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:42.744 [2024-12-06 09:49:07.810419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.744 [2024-12-06 09:49:07.810438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:42.744 [2024-12-06 09:49:07.810447] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.744 [2024-12-06 09:49:07.810825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.744 [2024-12-06 09:49:07.810849] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:42.744 [2024-12-06 09:49:07.810915] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:42.744 [2024-12-06 09:49:07.810933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:42.744 pt3 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.744 [2024-12-06 09:49:07.822318] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:42.744 [2024-12-06 
09:49:07.822359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.744 [2024-12-06 09:49:07.822374] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:42.744 [2024-12-06 09:49:07.822382] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.744 [2024-12-06 09:49:07.822748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.744 [2024-12-06 09:49:07.822773] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:42.744 [2024-12-06 09:49:07.822831] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:42.744 [2024-12-06 09:49:07.822854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:42.744 [2024-12-06 09:49:07.823001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:42.744 [2024-12-06 09:49:07.823014] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:42.744 [2024-12-06 09:49:07.823256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:42.744 [2024-12-06 09:49:07.823418] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:42.744 [2024-12-06 09:49:07.823435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:42.744 [2024-12-06 09:49:07.823573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.744 pt4 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.744 "name": "raid_bdev1", 00:11:42.744 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:42.744 "strip_size_kb": 0, 00:11:42.744 "state": "online", 00:11:42.744 "raid_level": "raid1", 00:11:42.744 "superblock": true, 00:11:42.744 "num_base_bdevs": 4, 00:11:42.744 
"num_base_bdevs_discovered": 4, 00:11:42.744 "num_base_bdevs_operational": 4, 00:11:42.744 "base_bdevs_list": [ 00:11:42.744 { 00:11:42.744 "name": "pt1", 00:11:42.744 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:42.744 "is_configured": true, 00:11:42.744 "data_offset": 2048, 00:11:42.744 "data_size": 63488 00:11:42.744 }, 00:11:42.744 { 00:11:42.744 "name": "pt2", 00:11:42.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.744 "is_configured": true, 00:11:42.744 "data_offset": 2048, 00:11:42.744 "data_size": 63488 00:11:42.744 }, 00:11:42.744 { 00:11:42.744 "name": "pt3", 00:11:42.744 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.744 "is_configured": true, 00:11:42.744 "data_offset": 2048, 00:11:42.744 "data_size": 63488 00:11:42.744 }, 00:11:42.744 { 00:11:42.744 "name": "pt4", 00:11:42.744 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:42.744 "is_configured": true, 00:11:42.744 "data_offset": 2048, 00:11:42.744 "data_size": 63488 00:11:42.744 } 00:11:42.744 ] 00:11:42.744 }' 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.744 09:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:43.315 09:49:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.315 [2024-12-06 09:49:08.293915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:43.315 "name": "raid_bdev1", 00:11:43.315 "aliases": [ 00:11:43.315 "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c" 00:11:43.315 ], 00:11:43.315 "product_name": "Raid Volume", 00:11:43.315 "block_size": 512, 00:11:43.315 "num_blocks": 63488, 00:11:43.315 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:43.315 "assigned_rate_limits": { 00:11:43.315 "rw_ios_per_sec": 0, 00:11:43.315 "rw_mbytes_per_sec": 0, 00:11:43.315 "r_mbytes_per_sec": 0, 00:11:43.315 "w_mbytes_per_sec": 0 00:11:43.315 }, 00:11:43.315 "claimed": false, 00:11:43.315 "zoned": false, 00:11:43.315 "supported_io_types": { 00:11:43.315 "read": true, 00:11:43.315 "write": true, 00:11:43.315 "unmap": false, 00:11:43.315 "flush": false, 00:11:43.315 "reset": true, 00:11:43.315 "nvme_admin": false, 00:11:43.315 "nvme_io": false, 00:11:43.315 "nvme_io_md": false, 00:11:43.315 "write_zeroes": true, 00:11:43.315 "zcopy": false, 00:11:43.315 "get_zone_info": false, 00:11:43.315 "zone_management": false, 00:11:43.315 "zone_append": false, 00:11:43.315 "compare": false, 00:11:43.315 "compare_and_write": false, 00:11:43.315 "abort": false, 00:11:43.315 "seek_hole": false, 00:11:43.315 "seek_data": false, 00:11:43.315 "copy": false, 00:11:43.315 "nvme_iov_md": false 00:11:43.315 }, 00:11:43.315 "memory_domains": [ 00:11:43.315 { 00:11:43.315 "dma_device_id": "system", 00:11:43.315 
"dma_device_type": 1 00:11:43.315 }, 00:11:43.315 { 00:11:43.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.315 "dma_device_type": 2 00:11:43.315 }, 00:11:43.315 { 00:11:43.315 "dma_device_id": "system", 00:11:43.315 "dma_device_type": 1 00:11:43.315 }, 00:11:43.315 { 00:11:43.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.315 "dma_device_type": 2 00:11:43.315 }, 00:11:43.315 { 00:11:43.315 "dma_device_id": "system", 00:11:43.315 "dma_device_type": 1 00:11:43.315 }, 00:11:43.315 { 00:11:43.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.315 "dma_device_type": 2 00:11:43.315 }, 00:11:43.315 { 00:11:43.315 "dma_device_id": "system", 00:11:43.315 "dma_device_type": 1 00:11:43.315 }, 00:11:43.315 { 00:11:43.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.315 "dma_device_type": 2 00:11:43.315 } 00:11:43.315 ], 00:11:43.315 "driver_specific": { 00:11:43.315 "raid": { 00:11:43.315 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:43.315 "strip_size_kb": 0, 00:11:43.315 "state": "online", 00:11:43.315 "raid_level": "raid1", 00:11:43.315 "superblock": true, 00:11:43.315 "num_base_bdevs": 4, 00:11:43.315 "num_base_bdevs_discovered": 4, 00:11:43.315 "num_base_bdevs_operational": 4, 00:11:43.315 "base_bdevs_list": [ 00:11:43.315 { 00:11:43.315 "name": "pt1", 00:11:43.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:43.315 "is_configured": true, 00:11:43.315 "data_offset": 2048, 00:11:43.315 "data_size": 63488 00:11:43.315 }, 00:11:43.315 { 00:11:43.315 "name": "pt2", 00:11:43.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.315 "is_configured": true, 00:11:43.315 "data_offset": 2048, 00:11:43.315 "data_size": 63488 00:11:43.315 }, 00:11:43.315 { 00:11:43.315 "name": "pt3", 00:11:43.315 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.315 "is_configured": true, 00:11:43.315 "data_offset": 2048, 00:11:43.315 "data_size": 63488 00:11:43.315 }, 00:11:43.315 { 00:11:43.315 "name": "pt4", 00:11:43.315 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:43.315 "is_configured": true, 00:11:43.315 "data_offset": 2048, 00:11:43.315 "data_size": 63488 00:11:43.315 } 00:11:43.315 ] 00:11:43.315 } 00:11:43.315 } 00:11:43.315 }' 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:43.315 pt2 00:11:43.315 pt3 00:11:43.315 pt4' 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.315 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.316 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.575 [2024-12-06 09:49:08.601322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' de63b3d9-ca5a-4a9d-94ae-753b9fd7515c '!=' de63b3d9-ca5a-4a9d-94ae-753b9fd7515c ']' 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.575 [2024-12-06 09:49:08.664973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:43.575 09:49:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.575 "name": "raid_bdev1", 00:11:43.575 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:43.575 "strip_size_kb": 0, 00:11:43.575 "state": "online", 
00:11:43.575 "raid_level": "raid1", 00:11:43.575 "superblock": true, 00:11:43.575 "num_base_bdevs": 4, 00:11:43.575 "num_base_bdevs_discovered": 3, 00:11:43.575 "num_base_bdevs_operational": 3, 00:11:43.575 "base_bdevs_list": [ 00:11:43.575 { 00:11:43.575 "name": null, 00:11:43.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.575 "is_configured": false, 00:11:43.575 "data_offset": 0, 00:11:43.575 "data_size": 63488 00:11:43.575 }, 00:11:43.575 { 00:11:43.575 "name": "pt2", 00:11:43.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.575 "is_configured": true, 00:11:43.575 "data_offset": 2048, 00:11:43.575 "data_size": 63488 00:11:43.575 }, 00:11:43.575 { 00:11:43.575 "name": "pt3", 00:11:43.575 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.575 "is_configured": true, 00:11:43.575 "data_offset": 2048, 00:11:43.575 "data_size": 63488 00:11:43.575 }, 00:11:43.575 { 00:11:43.575 "name": "pt4", 00:11:43.575 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:43.575 "is_configured": true, 00:11:43.575 "data_offset": 2048, 00:11:43.575 "data_size": 63488 00:11:43.575 } 00:11:43.575 ] 00:11:43.575 }' 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.575 09:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.835 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.835 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.835 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.835 [2024-12-06 09:49:09.104192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.835 [2024-12-06 09:49:09.104226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.835 [2024-12-06 09:49:09.104316] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:43.835 [2024-12-06 09:49:09.104395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.835 [2024-12-06 09:49:09.104407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:44.094 
09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:44.094 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.095 [2024-12-06 09:49:09.188006] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:44.095 [2024-12-06 09:49:09.188061] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.095 [2024-12-06 09:49:09.188079] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:44.095 [2024-12-06 09:49:09.188087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.095 [2024-12-06 09:49:09.190270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.095 [2024-12-06 09:49:09.190307] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:44.095 [2024-12-06 09:49:09.190386] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:44.095 [2024-12-06 09:49:09.190436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:44.095 pt2 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.095 "name": "raid_bdev1", 00:11:44.095 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:44.095 "strip_size_kb": 0, 00:11:44.095 "state": "configuring", 00:11:44.095 "raid_level": "raid1", 00:11:44.095 "superblock": true, 00:11:44.095 "num_base_bdevs": 4, 00:11:44.095 "num_base_bdevs_discovered": 1, 00:11:44.095 "num_base_bdevs_operational": 3, 00:11:44.095 "base_bdevs_list": [ 00:11:44.095 { 00:11:44.095 "name": null, 00:11:44.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.095 "is_configured": false, 00:11:44.095 "data_offset": 2048, 00:11:44.095 "data_size": 63488 00:11:44.095 }, 00:11:44.095 { 00:11:44.095 "name": "pt2", 00:11:44.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.095 "is_configured": true, 00:11:44.095 "data_offset": 2048, 00:11:44.095 "data_size": 63488 00:11:44.095 }, 00:11:44.095 { 00:11:44.095 "name": null, 00:11:44.095 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:44.095 "is_configured": false, 00:11:44.095 "data_offset": 2048, 00:11:44.095 "data_size": 63488 00:11:44.095 }, 00:11:44.095 { 00:11:44.095 "name": null, 00:11:44.095 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:44.095 "is_configured": false, 00:11:44.095 "data_offset": 2048, 00:11:44.095 "data_size": 63488 00:11:44.095 } 00:11:44.095 ] 00:11:44.095 }' 
00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.095 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.666 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:44.666 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:44.666 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:44.666 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.666 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.666 [2024-12-06 09:49:09.639312] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:44.666 [2024-12-06 09:49:09.639394] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.667 [2024-12-06 09:49:09.639430] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:44.667 [2024-12-06 09:49:09.639439] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.667 [2024-12-06 09:49:09.639948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.667 [2024-12-06 09:49:09.639977] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:44.667 [2024-12-06 09:49:09.640074] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:44.667 [2024-12-06 09:49:09.640100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:44.667 pt3 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.667 "name": "raid_bdev1", 00:11:44.667 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:44.667 "strip_size_kb": 0, 00:11:44.667 "state": "configuring", 00:11:44.667 "raid_level": "raid1", 00:11:44.667 "superblock": true, 00:11:44.667 "num_base_bdevs": 4, 00:11:44.667 "num_base_bdevs_discovered": 2, 00:11:44.667 "num_base_bdevs_operational": 3, 00:11:44.667 
"base_bdevs_list": [ 00:11:44.667 { 00:11:44.667 "name": null, 00:11:44.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.667 "is_configured": false, 00:11:44.667 "data_offset": 2048, 00:11:44.667 "data_size": 63488 00:11:44.667 }, 00:11:44.667 { 00:11:44.667 "name": "pt2", 00:11:44.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.667 "is_configured": true, 00:11:44.667 "data_offset": 2048, 00:11:44.667 "data_size": 63488 00:11:44.667 }, 00:11:44.667 { 00:11:44.667 "name": "pt3", 00:11:44.667 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:44.667 "is_configured": true, 00:11:44.667 "data_offset": 2048, 00:11:44.667 "data_size": 63488 00:11:44.667 }, 00:11:44.667 { 00:11:44.667 "name": null, 00:11:44.667 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:44.667 "is_configured": false, 00:11:44.667 "data_offset": 2048, 00:11:44.667 "data_size": 63488 00:11:44.667 } 00:11:44.667 ] 00:11:44.667 }' 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.667 09:49:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.927 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:44.927 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:44.927 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:44.927 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:44.927 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.927 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.927 [2024-12-06 09:49:10.066585] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:44.927 [2024-12-06 09:49:10.066654] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.927 [2024-12-06 09:49:10.066702] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:44.927 [2024-12-06 09:49:10.066715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.927 [2024-12-06 09:49:10.067175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.927 [2024-12-06 09:49:10.067201] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:44.927 [2024-12-06 09:49:10.067292] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:44.927 [2024-12-06 09:49:10.067320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:44.927 [2024-12-06 09:49:10.067457] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:44.927 [2024-12-06 09:49:10.067470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:44.927 [2024-12-06 09:49:10.067701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:44.927 [2024-12-06 09:49:10.067882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:44.927 [2024-12-06 09:49:10.067902] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:44.927 [2024-12-06 09:49:10.068034] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.927 pt4 00:11:44.927 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.927 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.928 "name": "raid_bdev1", 00:11:44.928 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:44.928 "strip_size_kb": 0, 00:11:44.928 "state": "online", 00:11:44.928 "raid_level": "raid1", 00:11:44.928 "superblock": true, 00:11:44.928 "num_base_bdevs": 4, 00:11:44.928 "num_base_bdevs_discovered": 3, 00:11:44.928 "num_base_bdevs_operational": 3, 00:11:44.928 "base_bdevs_list": [ 00:11:44.928 { 00:11:44.928 "name": null, 00:11:44.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.928 "is_configured": false, 00:11:44.928 
"data_offset": 2048, 00:11:44.928 "data_size": 63488 00:11:44.928 }, 00:11:44.928 { 00:11:44.928 "name": "pt2", 00:11:44.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:44.928 "is_configured": true, 00:11:44.928 "data_offset": 2048, 00:11:44.928 "data_size": 63488 00:11:44.928 }, 00:11:44.928 { 00:11:44.928 "name": "pt3", 00:11:44.928 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:44.928 "is_configured": true, 00:11:44.928 "data_offset": 2048, 00:11:44.928 "data_size": 63488 00:11:44.928 }, 00:11:44.928 { 00:11:44.928 "name": "pt4", 00:11:44.928 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:44.928 "is_configured": true, 00:11:44.928 "data_offset": 2048, 00:11:44.928 "data_size": 63488 00:11:44.928 } 00:11:44.928 ] 00:11:44.928 }' 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.928 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.187 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:45.187 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.187 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.187 [2024-12-06 09:49:10.445894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.187 [2024-12-06 09:49:10.445927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.187 [2024-12-06 09:49:10.446009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.187 [2024-12-06 09:49:10.446081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.187 [2024-12-06 09:49:10.446097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:45.187 09:49:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.187 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.187 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.187 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:45.187 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.455 [2024-12-06 09:49:10.521756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:45.455 [2024-12-06 09:49:10.521823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:45.455 [2024-12-06 09:49:10.521842] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:45.455 [2024-12-06 09:49:10.521854] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.455 [2024-12-06 09:49:10.523981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.455 [2024-12-06 09:49:10.524023] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:45.455 [2024-12-06 09:49:10.524120] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:45.455 [2024-12-06 09:49:10.524179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.455 [2024-12-06 09:49:10.524325] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:45.455 [2024-12-06 09:49:10.524343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:45.455 [2024-12-06 09:49:10.524359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:45.455 [2024-12-06 09:49:10.524419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:45.455 [2024-12-06 09:49:10.524522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:45.455 pt1 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.455 "name": "raid_bdev1", 00:11:45.455 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:45.455 "strip_size_kb": 0, 00:11:45.455 "state": "configuring", 00:11:45.455 "raid_level": "raid1", 00:11:45.455 "superblock": true, 00:11:45.455 "num_base_bdevs": 4, 00:11:45.455 "num_base_bdevs_discovered": 2, 00:11:45.455 "num_base_bdevs_operational": 3, 00:11:45.455 "base_bdevs_list": [ 00:11:45.455 { 00:11:45.455 "name": null, 00:11:45.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.455 "is_configured": false, 00:11:45.455 "data_offset": 2048, 00:11:45.455 
"data_size": 63488 00:11:45.455 }, 00:11:45.455 { 00:11:45.455 "name": "pt2", 00:11:45.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.455 "is_configured": true, 00:11:45.455 "data_offset": 2048, 00:11:45.455 "data_size": 63488 00:11:45.455 }, 00:11:45.455 { 00:11:45.455 "name": "pt3", 00:11:45.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.455 "is_configured": true, 00:11:45.455 "data_offset": 2048, 00:11:45.455 "data_size": 63488 00:11:45.455 }, 00:11:45.455 { 00:11:45.455 "name": null, 00:11:45.455 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.455 "is_configured": false, 00:11:45.455 "data_offset": 2048, 00:11:45.455 "data_size": 63488 00:11:45.455 } 00:11:45.455 ] 00:11:45.455 }' 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.455 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.726 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:45.726 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.726 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.726 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:45.726 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.985 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:45.985 09:49:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:45.985 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.985 09:49:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.985 [2024-12-06 
09:49:11.004960] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:45.985 [2024-12-06 09:49:11.005028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.985 [2024-12-06 09:49:11.005050] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:45.985 [2024-12-06 09:49:11.005059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.985 [2024-12-06 09:49:11.005524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.985 [2024-12-06 09:49:11.005551] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:45.985 [2024-12-06 09:49:11.005639] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:45.985 [2024-12-06 09:49:11.005666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:45.985 [2024-12-06 09:49:11.005800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:45.985 [2024-12-06 09:49:11.005815] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.985 [2024-12-06 09:49:11.006068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:45.985 [2024-12-06 09:49:11.006237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:45.985 [2024-12-06 09:49:11.006253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:45.985 [2024-12-06 09:49:11.006404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.985 pt4 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:45.985 09:49:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.985 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.985 "name": "raid_bdev1", 00:11:45.985 "uuid": "de63b3d9-ca5a-4a9d-94ae-753b9fd7515c", 00:11:45.985 "strip_size_kb": 0, 00:11:45.985 "state": "online", 00:11:45.985 "raid_level": "raid1", 00:11:45.986 "superblock": true, 00:11:45.986 "num_base_bdevs": 4, 00:11:45.986 "num_base_bdevs_discovered": 3, 00:11:45.986 "num_base_bdevs_operational": 3, 00:11:45.986 "base_bdevs_list": [ 00:11:45.986 { 
00:11:45.986 "name": null, 00:11:45.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.986 "is_configured": false, 00:11:45.986 "data_offset": 2048, 00:11:45.986 "data_size": 63488 00:11:45.986 }, 00:11:45.986 { 00:11:45.986 "name": "pt2", 00:11:45.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:45.986 "is_configured": true, 00:11:45.986 "data_offset": 2048, 00:11:45.986 "data_size": 63488 00:11:45.986 }, 00:11:45.986 { 00:11:45.986 "name": "pt3", 00:11:45.986 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:45.986 "is_configured": true, 00:11:45.986 "data_offset": 2048, 00:11:45.986 "data_size": 63488 00:11:45.986 }, 00:11:45.986 { 00:11:45.986 "name": "pt4", 00:11:45.986 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:45.986 "is_configured": true, 00:11:45.986 "data_offset": 2048, 00:11:45.986 "data_size": 63488 00:11:45.986 } 00:11:45.986 ] 00:11:45.986 }' 00:11:45.986 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.986 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.243 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.244 
09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:46.244 [2024-12-06 09:49:11.412557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' de63b3d9-ca5a-4a9d-94ae-753b9fd7515c '!=' de63b3d9-ca5a-4a9d-94ae-753b9fd7515c ']' 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74444 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74444 ']' 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74444 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74444 00:11:46.244 killing process with pid 74444 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74444' 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74444 00:11:46.244 [2024-12-06 09:49:11.474565] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:46.244 [2024-12-06 09:49:11.474657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.244 [2024-12-06 09:49:11.474732] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.244 09:49:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74444 00:11:46.244 [2024-12-06 09:49:11.474745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:46.809 [2024-12-06 09:49:11.850479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:47.746 ************************************ 00:11:47.746 END TEST raid_superblock_test 00:11:47.746 ************************************ 00:11:47.746 09:49:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:47.746 00:11:47.746 real 0m8.166s 00:11:47.746 user 0m12.836s 00:11:47.746 sys 0m1.390s 00:11:47.746 09:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.746 09:49:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.746 09:49:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:47.746 09:49:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:47.746 09:49:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.746 09:49:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:48.004 ************************************ 00:11:48.004 START TEST raid_read_error_test 00:11:48.004 ************************************ 00:11:48.004 09:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:48.004 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:48.004 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:48.004 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:48.004 09:49:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:48.004 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.004 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:48.004 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.004 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.l4riOFLVXr 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74928 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74928 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74928 ']' 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.005 09:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.005 [2024-12-06 09:49:13.134191] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:11:48.005 [2024-12-06 09:49:13.134305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74928 ] 00:11:48.264 [2024-12-06 09:49:13.289729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.264 [2024-12-06 09:49:13.401915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.523 [2024-12-06 09:49:13.601427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.523 [2024-12-06 09:49:13.601498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.784 09:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.784 09:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:48.784 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:48.784 09:49:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:48.784 09:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.784 09:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.784 BaseBdev1_malloc 00:11:48.784 09:49:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.784 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:48.784 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.784 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.784 true 00:11:48.784 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:48.784 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:48.784 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.784 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.784 [2024-12-06 09:49:14.019483] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:48.784 [2024-12-06 09:49:14.019542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.784 [2024-12-06 09:49:14.019563] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:48.784 [2024-12-06 09:49:14.019574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.784 [2024-12-06 09:49:14.021759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.784 [2024-12-06 09:49:14.021799] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:48.784 BaseBdev1 00:11:48.784 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.784 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:48.784 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:48.784 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.784 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.045 BaseBdev2_malloc 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.045 true 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.045 [2024-12-06 09:49:14.085220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:49.045 [2024-12-06 09:49:14.085279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.045 [2024-12-06 09:49:14.085298] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:49.045 [2024-12-06 09:49:14.085308] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.045 [2024-12-06 09:49:14.087606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.045 [2024-12-06 09:49:14.087640] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:49.045 BaseBdev2 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.045 BaseBdev3_malloc 00:11:49.045 09:49:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.045 true 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.045 [2024-12-06 09:49:14.166427] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:49.045 [2024-12-06 09:49:14.166483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.045 [2024-12-06 09:49:14.166504] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:49.045 [2024-12-06 09:49:14.166514] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.045 [2024-12-06 09:49:14.168702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.045 [2024-12-06 09:49:14.168740] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:49.045 BaseBdev3 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.045 BaseBdev4_malloc 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.045 true 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.045 [2024-12-06 09:49:14.234980] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:49.045 [2024-12-06 09:49:14.235044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.045 [2024-12-06 09:49:14.235065] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:49.045 [2024-12-06 09:49:14.235076] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.045 [2024-12-06 09:49:14.237436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.045 [2024-12-06 09:49:14.237474] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:49.045 BaseBdev4 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.045 [2024-12-06 09:49:14.247020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.045 [2024-12-06 09:49:14.249048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.045 [2024-12-06 09:49:14.249133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.045 [2024-12-06 09:49:14.249210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:49.045 [2024-12-06 09:49:14.249473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:49.045 [2024-12-06 09:49:14.249504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:49.045 [2024-12-06 09:49:14.249772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:49.045 [2024-12-06 09:49:14.249958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:49.045 [2024-12-06 09:49:14.249975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:49.045 [2024-12-06 09:49:14.250182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:49.045 09:49:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.045 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.045 "name": "raid_bdev1", 00:11:49.045 "uuid": "b3961a8c-0688-4cb5-b00b-c2ccf0cbe804", 00:11:49.045 "strip_size_kb": 0, 00:11:49.045 "state": "online", 00:11:49.045 "raid_level": "raid1", 00:11:49.045 "superblock": true, 00:11:49.045 "num_base_bdevs": 4, 00:11:49.045 "num_base_bdevs_discovered": 4, 00:11:49.045 "num_base_bdevs_operational": 4, 00:11:49.045 "base_bdevs_list": [ 00:11:49.045 { 
00:11:49.045 "name": "BaseBdev1", 00:11:49.045 "uuid": "1f1c03aa-cce0-5e0d-964d-ee2e1c39fa4d", 00:11:49.045 "is_configured": true, 00:11:49.045 "data_offset": 2048, 00:11:49.045 "data_size": 63488 00:11:49.045 }, 00:11:49.045 { 00:11:49.045 "name": "BaseBdev2", 00:11:49.045 "uuid": "46f2dd68-b3d3-5cad-9e75-b7d5074bc844", 00:11:49.045 "is_configured": true, 00:11:49.045 "data_offset": 2048, 00:11:49.045 "data_size": 63488 00:11:49.045 }, 00:11:49.045 { 00:11:49.045 "name": "BaseBdev3", 00:11:49.045 "uuid": "0f71c054-17c6-58d4-8d0b-910eed6e8ee4", 00:11:49.045 "is_configured": true, 00:11:49.045 "data_offset": 2048, 00:11:49.046 "data_size": 63488 00:11:49.046 }, 00:11:49.046 { 00:11:49.046 "name": "BaseBdev4", 00:11:49.046 "uuid": "d1810102-97d3-5a37-ace3-883f0fc336c3", 00:11:49.046 "is_configured": true, 00:11:49.046 "data_offset": 2048, 00:11:49.046 "data_size": 63488 00:11:49.046 } 00:11:49.046 ] 00:11:49.046 }' 00:11:49.046 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.046 09:49:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.616 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:49.616 09:49:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:49.616 [2024-12-06 09:49:14.779380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.555 09:49:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.555 09:49:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.555 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.555 "name": "raid_bdev1", 00:11:50.555 "uuid": "b3961a8c-0688-4cb5-b00b-c2ccf0cbe804", 00:11:50.555 "strip_size_kb": 0, 00:11:50.555 "state": "online", 00:11:50.555 "raid_level": "raid1", 00:11:50.555 "superblock": true, 00:11:50.555 "num_base_bdevs": 4, 00:11:50.555 "num_base_bdevs_discovered": 4, 00:11:50.555 "num_base_bdevs_operational": 4, 00:11:50.555 "base_bdevs_list": [ 00:11:50.555 { 00:11:50.555 "name": "BaseBdev1", 00:11:50.555 "uuid": "1f1c03aa-cce0-5e0d-964d-ee2e1c39fa4d", 00:11:50.555 "is_configured": true, 00:11:50.555 "data_offset": 2048, 00:11:50.555 "data_size": 63488 00:11:50.555 }, 00:11:50.555 { 00:11:50.555 "name": "BaseBdev2", 00:11:50.555 "uuid": "46f2dd68-b3d3-5cad-9e75-b7d5074bc844", 00:11:50.555 "is_configured": true, 00:11:50.555 "data_offset": 2048, 00:11:50.555 "data_size": 63488 00:11:50.555 }, 00:11:50.555 { 00:11:50.555 "name": "BaseBdev3", 00:11:50.555 "uuid": "0f71c054-17c6-58d4-8d0b-910eed6e8ee4", 00:11:50.555 "is_configured": true, 00:11:50.555 "data_offset": 2048, 00:11:50.555 "data_size": 63488 00:11:50.555 }, 00:11:50.555 { 00:11:50.555 "name": "BaseBdev4", 00:11:50.555 "uuid": "d1810102-97d3-5a37-ace3-883f0fc336c3", 00:11:50.555 "is_configured": true, 00:11:50.556 "data_offset": 2048, 00:11:50.556 "data_size": 63488 00:11:50.556 } 00:11:50.556 ] 00:11:50.556 }' 00:11:50.556 09:49:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.556 09:49:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.125 09:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:51.126 [2024-12-06 09:49:16.178862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:51.126 [2024-12-06 09:49:16.178906] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.126 [2024-12-06 09:49:16.181716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.126 [2024-12-06 09:49:16.181775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.126 [2024-12-06 09:49:16.181907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.126 [2024-12-06 09:49:16.181922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:51.126 { 00:11:51.126 "results": [ 00:11:51.126 { 00:11:51.126 "job": "raid_bdev1", 00:11:51.126 "core_mask": "0x1", 00:11:51.126 "workload": "randrw", 00:11:51.126 "percentage": 50, 00:11:51.126 "status": "finished", 00:11:51.126 "queue_depth": 1, 00:11:51.126 "io_size": 131072, 00:11:51.126 "runtime": 1.400518, 00:11:51.126 "iops": 10695.328442761893, 00:11:51.126 "mibps": 1336.9160553452366, 00:11:51.126 "io_failed": 0, 00:11:51.126 "io_timeout": 0, 00:11:51.126 "avg_latency_us": 90.84887086462533, 00:11:51.126 "min_latency_us": 23.14061135371179, 00:11:51.126 "max_latency_us": 1581.1633187772925 00:11:51.126 } 00:11:51.126 ], 00:11:51.126 "core_count": 1 00:11:51.126 } 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74928 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74928 ']' 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74928 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74928 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.126 killing process with pid 74928 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74928' 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74928 00:11:51.126 [2024-12-06 09:49:16.214650] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:51.126 09:49:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74928 00:11:51.385 [2024-12-06 09:49:16.533435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:52.774 09:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:52.774 09:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.l4riOFLVXr 00:11:52.774 09:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:52.774 09:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:52.774 09:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:52.774 09:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:52.774 09:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:52.774 09:49:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:52.774 00:11:52.774 real 0m4.676s 00:11:52.774 user 0m5.523s 00:11:52.774 sys 0m0.591s 
00:11:52.774 09:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:52.774 09:49:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.774 ************************************ 00:11:52.774 END TEST raid_read_error_test 00:11:52.774 ************************************ 00:11:52.774 09:49:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:52.774 09:49:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:52.774 09:49:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.774 09:49:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:52.774 ************************************ 00:11:52.774 START TEST raid_write_error_test 00:11:52.774 ************************************ 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ykDPL6ECTL 00:11:52.774 09:49:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75078 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75078 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75078 ']' 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.774 09:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.775 09:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.775 09:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.775 09:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.775 [2024-12-06 09:49:17.873782] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:11:52.775 [2024-12-06 09:49:17.873897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75078 ] 00:11:52.775 [2024-12-06 09:49:18.044729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.033 [2024-12-06 09:49:18.155504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.292 [2024-12-06 09:49:18.342039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.292 [2024-12-06 09:49:18.342093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.551 BaseBdev1_malloc 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.551 true 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.551 [2024-12-06 09:49:18.777088] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:53.551 [2024-12-06 09:49:18.777156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.551 [2024-12-06 09:49:18.777180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:53.551 [2024-12-06 09:49:18.777191] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.551 [2024-12-06 09:49:18.779318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.551 [2024-12-06 09:49:18.779356] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:53.551 BaseBdev1 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.551 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.810 BaseBdev2_malloc 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:53.810 09:49:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.810 true 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.810 [2024-12-06 09:49:18.844360] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:53.810 [2024-12-06 09:49:18.844418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.810 [2024-12-06 09:49:18.844438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:53.810 [2024-12-06 09:49:18.844448] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.810 [2024-12-06 09:49:18.846584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.810 [2024-12-06 09:49:18.846620] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:53.810 BaseBdev2 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:53.810 BaseBdev3_malloc 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.810 true 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.810 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.811 [2024-12-06 09:49:18.926053] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:53.811 [2024-12-06 09:49:18.926112] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.811 [2024-12-06 09:49:18.926133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:53.811 [2024-12-06 09:49:18.926153] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.811 [2024-12-06 09:49:18.928276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.811 [2024-12-06 09:49:18.928314] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:53.811 BaseBdev3 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.811 BaseBdev4_malloc 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.811 true 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.811 [2024-12-06 09:49:18.993621] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:53.811 [2024-12-06 09:49:18.993700] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.811 [2024-12-06 09:49:18.993721] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:53.811 [2024-12-06 09:49:18.993731] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.811 [2024-12-06 09:49:18.995824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.811 [2024-12-06 09:49:18.995863] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:53.811 BaseBdev4 
00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.811 09:49:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.811 [2024-12-06 09:49:19.005640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.811 [2024-12-06 09:49:19.007496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.811 [2024-12-06 09:49:19.007599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.811 [2024-12-06 09:49:19.007667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:53.811 [2024-12-06 09:49:19.007976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:53.811 [2024-12-06 09:49:19.007997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.811 [2024-12-06 09:49:19.008301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:53.811 [2024-12-06 09:49:19.008521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:53.811 [2024-12-06 09:49:19.008539] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:53.811 [2024-12-06 09:49:19.008737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.811 "name": "raid_bdev1", 00:11:53.811 "uuid": "4daa7d1a-bf43-4658-b001-e671ced74ca0", 00:11:53.811 "strip_size_kb": 0, 00:11:53.811 "state": "online", 00:11:53.811 "raid_level": "raid1", 00:11:53.811 "superblock": true, 00:11:53.811 "num_base_bdevs": 4, 00:11:53.811 "num_base_bdevs_discovered": 4, 00:11:53.811 
"num_base_bdevs_operational": 4, 00:11:53.811 "base_bdevs_list": [ 00:11:53.811 { 00:11:53.811 "name": "BaseBdev1", 00:11:53.811 "uuid": "ce287d33-33ea-529e-b800-20769bab5fb7", 00:11:53.811 "is_configured": true, 00:11:53.811 "data_offset": 2048, 00:11:53.811 "data_size": 63488 00:11:53.811 }, 00:11:53.811 { 00:11:53.811 "name": "BaseBdev2", 00:11:53.811 "uuid": "b5a7221b-1073-5b2b-9209-305361e2e20c", 00:11:53.811 "is_configured": true, 00:11:53.811 "data_offset": 2048, 00:11:53.811 "data_size": 63488 00:11:53.811 }, 00:11:53.811 { 00:11:53.811 "name": "BaseBdev3", 00:11:53.811 "uuid": "f81c2d3a-aa0a-56de-a616-bfc1560f9d4f", 00:11:53.811 "is_configured": true, 00:11:53.811 "data_offset": 2048, 00:11:53.811 "data_size": 63488 00:11:53.811 }, 00:11:53.811 { 00:11:53.811 "name": "BaseBdev4", 00:11:53.811 "uuid": "ff0de8b2-c0c2-5fa1-8158-45ba238dcaf9", 00:11:53.811 "is_configured": true, 00:11:53.811 "data_offset": 2048, 00:11:53.811 "data_size": 63488 00:11:53.811 } 00:11:53.811 ] 00:11:53.811 }' 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.811 09:49:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.380 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:54.380 09:49:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:54.380 [2024-12-06 09:49:19.466103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.317 [2024-12-06 09:49:20.384163] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:55.317 [2024-12-06 09:49:20.384217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:55.317 [2024-12-06 09:49:20.384443] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.317 "name": "raid_bdev1", 00:11:55.317 "uuid": "4daa7d1a-bf43-4658-b001-e671ced74ca0", 00:11:55.317 "strip_size_kb": 0, 00:11:55.317 "state": "online", 00:11:55.317 "raid_level": "raid1", 00:11:55.317 "superblock": true, 00:11:55.317 "num_base_bdevs": 4, 00:11:55.317 "num_base_bdevs_discovered": 3, 00:11:55.317 "num_base_bdevs_operational": 3, 00:11:55.317 "base_bdevs_list": [ 00:11:55.317 { 00:11:55.317 "name": null, 00:11:55.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.317 "is_configured": false, 00:11:55.317 "data_offset": 0, 00:11:55.317 "data_size": 63488 00:11:55.317 }, 00:11:55.317 { 00:11:55.317 "name": "BaseBdev2", 00:11:55.317 "uuid": "b5a7221b-1073-5b2b-9209-305361e2e20c", 00:11:55.317 "is_configured": true, 00:11:55.317 "data_offset": 2048, 00:11:55.317 "data_size": 63488 00:11:55.317 }, 00:11:55.317 { 00:11:55.317 "name": "BaseBdev3", 00:11:55.317 "uuid": "f81c2d3a-aa0a-56de-a616-bfc1560f9d4f", 00:11:55.317 "is_configured": true, 00:11:55.317 "data_offset": 2048, 00:11:55.317 "data_size": 63488 00:11:55.317 }, 00:11:55.317 { 00:11:55.317 "name": "BaseBdev4", 00:11:55.317 "uuid": "ff0de8b2-c0c2-5fa1-8158-45ba238dcaf9", 00:11:55.317 "is_configured": true, 00:11:55.317 "data_offset": 2048, 00:11:55.317 "data_size": 63488 00:11:55.317 } 00:11:55.317 ] 
00:11:55.317 }' 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.317 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.578 [2024-12-06 09:49:20.791451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.578 [2024-12-06 09:49:20.791489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.578 [2024-12-06 09:49:20.794737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.578 [2024-12-06 09:49:20.794789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.578 [2024-12-06 09:49:20.794905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.578 [2024-12-06 09:49:20.794925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:55.578 { 00:11:55.578 "results": [ 00:11:55.578 { 00:11:55.578 "job": "raid_bdev1", 00:11:55.578 "core_mask": "0x1", 00:11:55.578 "workload": "randrw", 00:11:55.578 "percentage": 50, 00:11:55.578 "status": "finished", 00:11:55.578 "queue_depth": 1, 00:11:55.578 "io_size": 131072, 00:11:55.578 "runtime": 1.326348, 00:11:55.578 "iops": 11477.379993787452, 00:11:55.578 "mibps": 1434.6724992234315, 00:11:55.578 "io_failed": 0, 00:11:55.578 "io_timeout": 0, 00:11:55.578 "avg_latency_us": 84.49211974411278, 00:11:55.578 "min_latency_us": 22.805240174672488, 00:11:55.578 "max_latency_us": 1359.3711790393013 00:11:55.578 } 00:11:55.578 ], 00:11:55.578 "core_count": 1 
00:11:55.578 } 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75078 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75078 ']' 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75078 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75078 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:55.578 killing process with pid 75078 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75078' 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75078 00:11:55.578 [2024-12-06 09:49:20.840071] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.578 09:49:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75078 00:11:56.148 [2024-12-06 09:49:21.150035] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.088 09:49:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ykDPL6ECTL 00:11:57.088 09:49:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:57.088 09:49:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:57.088 09:49:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:57.088 09:49:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:57.088 09:49:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:57.088 09:49:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:57.088 09:49:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:57.088 00:11:57.088 real 0m4.559s 00:11:57.088 user 0m5.318s 00:11:57.088 sys 0m0.556s 00:11:57.088 09:49:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.088 09:49:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.088 ************************************ 00:11:57.088 END TEST raid_write_error_test 00:11:57.088 ************************************ 00:11:57.348 09:49:22 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:57.348 09:49:22 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:57.348 09:49:22 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:57.348 09:49:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:57.348 09:49:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.348 09:49:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:57.348 ************************************ 00:11:57.348 START TEST raid_rebuild_test 00:11:57.348 ************************************ 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:57.348 
09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75222 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75222 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75222 ']' 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.348 09:49:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.348 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:57.348 Zero copy mechanism will not be used. 00:11:57.348 [2024-12-06 09:49:22.490984] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:11:57.348 [2024-12-06 09:49:22.491102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75222 ] 00:11:57.608 [2024-12-06 09:49:22.663983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.608 [2024-12-06 09:49:22.772475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.868 [2024-12-06 09:49:22.967849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.868 [2024-12-06 09:49:22.967909] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.128 BaseBdev1_malloc 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.128 [2024-12-06 09:49:23.358420] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:58.128 
[2024-12-06 09:49:23.358477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.128 [2024-12-06 09:49:23.358499] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:58.128 [2024-12-06 09:49:23.358511] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.128 [2024-12-06 09:49:23.360583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.128 [2024-12-06 09:49:23.360619] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:58.128 BaseBdev1 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.128 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.389 BaseBdev2_malloc 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.389 [2024-12-06 09:49:23.412980] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:58.389 [2024-12-06 09:49:23.413035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.389 [2024-12-06 09:49:23.413056] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:11:58.389 [2024-12-06 09:49:23.413067] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.389 [2024-12-06 09:49:23.415084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.389 [2024-12-06 09:49:23.415119] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:58.389 BaseBdev2 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.389 spare_malloc 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.389 spare_delay 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.389 [2024-12-06 09:49:23.486570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:58.389 [2024-12-06 09:49:23.486642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:58.389 [2024-12-06 09:49:23.486662] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:58.389 [2024-12-06 09:49:23.486673] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.389 [2024-12-06 09:49:23.488905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.389 [2024-12-06 09:49:23.488940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:58.389 spare 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.389 [2024-12-06 09:49:23.498597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.389 [2024-12-06 09:49:23.500403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.389 [2024-12-06 09:49:23.500491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:58.389 [2024-12-06 09:49:23.500505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:58.389 [2024-12-06 09:49:23.500750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:58.389 [2024-12-06 09:49:23.500909] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:58.389 [2024-12-06 09:49:23.500924] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:58.389 [2024-12-06 09:49:23.501080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.389 "name": "raid_bdev1", 00:11:58.389 "uuid": "92b304ce-c21b-46d9-9ea3-8505d1b9c5d1", 00:11:58.389 "strip_size_kb": 0, 00:11:58.389 "state": "online", 00:11:58.389 
"raid_level": "raid1", 00:11:58.389 "superblock": false, 00:11:58.389 "num_base_bdevs": 2, 00:11:58.389 "num_base_bdevs_discovered": 2, 00:11:58.389 "num_base_bdevs_operational": 2, 00:11:58.389 "base_bdevs_list": [ 00:11:58.389 { 00:11:58.389 "name": "BaseBdev1", 00:11:58.389 "uuid": "6f11d069-f704-5943-bbf0-c47bf3952df3", 00:11:58.389 "is_configured": true, 00:11:58.389 "data_offset": 0, 00:11:58.389 "data_size": 65536 00:11:58.389 }, 00:11:58.389 { 00:11:58.389 "name": "BaseBdev2", 00:11:58.389 "uuid": "736d1cc8-1967-5266-ba29-01fbd63dbf4b", 00:11:58.389 "is_configured": true, 00:11:58.389 "data_offset": 0, 00:11:58.389 "data_size": 65536 00:11:58.389 } 00:11:58.389 ] 00:11:58.389 }' 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.389 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.959 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:58.959 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.959 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.959 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:58.959 [2024-12-06 09:49:23.954095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.959 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.959 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:58.959 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.959 09:49:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:58.959 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.959 09:49:23 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.959 09:49:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:58.959 09:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:59.218 [2024-12-06 09:49:24.249376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:59.218 /dev/nbd0 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:59.218 1+0 records in 00:11:59.218 1+0 records out 00:11:59.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239173 s, 17.1 MB/s 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:59.218 09:49:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:03.429 65536+0 records in 00:12:03.429 65536+0 records out 00:12:03.429 33554432 bytes (34 MB, 32 MiB) copied, 3.79992 s, 8.8 MB/s 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:03.429 [2024-12-06 09:49:28.323233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.429 [2024-12-06 09:49:28.339305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.429 09:49:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.429 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.429 "name": "raid_bdev1", 00:12:03.429 "uuid": "92b304ce-c21b-46d9-9ea3-8505d1b9c5d1", 00:12:03.429 "strip_size_kb": 0, 00:12:03.429 "state": "online", 00:12:03.429 "raid_level": "raid1", 00:12:03.429 "superblock": false, 00:12:03.429 "num_base_bdevs": 2, 00:12:03.429 "num_base_bdevs_discovered": 1, 00:12:03.429 "num_base_bdevs_operational": 1, 00:12:03.429 "base_bdevs_list": [ 00:12:03.429 { 00:12:03.429 "name": null, 00:12:03.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.429 "is_configured": false, 00:12:03.429 "data_offset": 0, 00:12:03.429 "data_size": 65536 00:12:03.429 }, 00:12:03.429 { 00:12:03.429 "name": "BaseBdev2", 00:12:03.430 "uuid": "736d1cc8-1967-5266-ba29-01fbd63dbf4b", 00:12:03.430 "is_configured": true, 00:12:03.430 "data_offset": 0, 00:12:03.430 "data_size": 65536 00:12:03.430 } 00:12:03.430 ] 00:12:03.430 }' 00:12:03.430 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.430 09:49:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.688 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:03.688 09:49:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.688 09:49:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.688 [2024-12-06 09:49:28.782582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:03.688 [2024-12-06 09:49:28.798927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
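The `verify_raid_bdev_state` calls traced above capture the output of `rpc_cmd bdev_raid_get_bdevs all` and filter it with `jq -r '.[] | select(.name == "raid_bdev1")'` before checking individual fields. A minimal self-contained sketch of that extraction pattern follows; the JSON literal is a trimmed stand-in for the RPC response shown in the log (in the real test it comes from the running SPDK target over `/var/tmp/spdk.sock`), and the second entry is hypothetical padding to show that `select` picks one bdev out of the array.

```shell
# Stand-in for the `rpc_cmd bdev_raid_get_bdevs all` response (trimmed;
# the second element is hypothetical, added only to exercise select()).
all_bdevs='[
  {"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
   "num_base_bdevs": 2, "num_base_bdevs_discovered": 1,
   "num_base_bdevs_operational": 1},
  {"name": "other_bdev", "state": "offline", "raid_level": "raid1",
   "num_base_bdevs": 2, "num_base_bdevs_discovered": 0,
   "num_base_bdevs_operational": 0}
]'

# Same filter the test uses: keep only the bdev under inspection.
raid_bdev_info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<<"$all_bdevs")

# Pull out the fields that verify_raid_bdev_state compares.
state=$(jq -r '.state' <<<"$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$raid_bdev_info")
```

The `.process.type // "none"` filters seen later in the log use the same approach, with jq's `//` alternative operator supplying a default when no rebuild process is active.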
00:12:03.688 09:49:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.688 09:49:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:03.688 [2024-12-06 09:49:28.800794] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:04.625 09:49:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.626 09:49:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.626 09:49:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.626 09:49:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.626 09:49:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.626 09:49:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.626 09:49:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.626 09:49:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.626 09:49:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.626 09:49:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.626 09:49:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.626 "name": "raid_bdev1", 00:12:04.626 "uuid": "92b304ce-c21b-46d9-9ea3-8505d1b9c5d1", 00:12:04.626 "strip_size_kb": 0, 00:12:04.626 "state": "online", 00:12:04.626 "raid_level": "raid1", 00:12:04.626 "superblock": false, 00:12:04.626 "num_base_bdevs": 2, 00:12:04.626 "num_base_bdevs_discovered": 2, 00:12:04.626 "num_base_bdevs_operational": 2, 00:12:04.626 "process": { 00:12:04.626 "type": "rebuild", 00:12:04.626 "target": "spare", 00:12:04.626 "progress": { 00:12:04.626 
"blocks": 20480, 00:12:04.626 "percent": 31 00:12:04.626 } 00:12:04.626 }, 00:12:04.626 "base_bdevs_list": [ 00:12:04.626 { 00:12:04.626 "name": "spare", 00:12:04.626 "uuid": "74063a53-ba17-5164-b9a4-00a94128544a", 00:12:04.626 "is_configured": true, 00:12:04.626 "data_offset": 0, 00:12:04.626 "data_size": 65536 00:12:04.626 }, 00:12:04.626 { 00:12:04.626 "name": "BaseBdev2", 00:12:04.626 "uuid": "736d1cc8-1967-5266-ba29-01fbd63dbf4b", 00:12:04.626 "is_configured": true, 00:12:04.626 "data_offset": 0, 00:12:04.626 "data_size": 65536 00:12:04.626 } 00:12:04.626 ] 00:12:04.626 }' 00:12:04.626 09:49:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.885 09:49:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:04.885 09:49:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.885 09:49:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:04.885 09:49:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:04.885 09:49:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.885 09:49:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.885 [2024-12-06 09:49:29.964807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:04.885 [2024-12-06 09:49:30.005815] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:04.885 [2024-12-06 09:49:30.005885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.885 [2024-12-06 09:49:30.005899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:04.885 [2024-12-06 09:49:30.005908] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:04.885 09:49:30 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.885 "name": "raid_bdev1", 00:12:04.885 "uuid": "92b304ce-c21b-46d9-9ea3-8505d1b9c5d1", 00:12:04.885 "strip_size_kb": 0, 00:12:04.885 "state": "online", 00:12:04.885 "raid_level": "raid1", 00:12:04.885 
"superblock": false, 00:12:04.885 "num_base_bdevs": 2, 00:12:04.885 "num_base_bdevs_discovered": 1, 00:12:04.885 "num_base_bdevs_operational": 1, 00:12:04.885 "base_bdevs_list": [ 00:12:04.885 { 00:12:04.885 "name": null, 00:12:04.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.885 "is_configured": false, 00:12:04.885 "data_offset": 0, 00:12:04.885 "data_size": 65536 00:12:04.885 }, 00:12:04.885 { 00:12:04.885 "name": "BaseBdev2", 00:12:04.885 "uuid": "736d1cc8-1967-5266-ba29-01fbd63dbf4b", 00:12:04.885 "is_configured": true, 00:12:04.885 "data_offset": 0, 00:12:04.885 "data_size": 65536 00:12:04.885 } 00:12:04.885 ] 00:12:04.885 }' 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.885 09:49:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:05.453 "name": "raid_bdev1", 00:12:05.453 "uuid": "92b304ce-c21b-46d9-9ea3-8505d1b9c5d1", 00:12:05.453 "strip_size_kb": 0, 00:12:05.453 "state": "online", 00:12:05.453 "raid_level": "raid1", 00:12:05.453 "superblock": false, 00:12:05.453 "num_base_bdevs": 2, 00:12:05.453 "num_base_bdevs_discovered": 1, 00:12:05.453 "num_base_bdevs_operational": 1, 00:12:05.453 "base_bdevs_list": [ 00:12:05.453 { 00:12:05.453 "name": null, 00:12:05.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.453 "is_configured": false, 00:12:05.453 "data_offset": 0, 00:12:05.453 "data_size": 65536 00:12:05.453 }, 00:12:05.453 { 00:12:05.453 "name": "BaseBdev2", 00:12:05.453 "uuid": "736d1cc8-1967-5266-ba29-01fbd63dbf4b", 00:12:05.453 "is_configured": true, 00:12:05.453 "data_offset": 0, 00:12:05.453 "data_size": 65536 00:12:05.453 } 00:12:05.453 ] 00:12:05.453 }' 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:05.453 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.454 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:05.454 09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:05.454 09:49:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.454 09:49:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.454 [2024-12-06 09:49:30.663630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:05.454 [2024-12-06 09:49:30.678922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:05.454 09:49:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.454 
09:49:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:05.454 [2024-12-06 09:49:30.680695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:06.437 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.437 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.437 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.437 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.437 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.437 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.437 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.437 09:49:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.437 09:49:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.696 09:49:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.696 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.696 "name": "raid_bdev1", 00:12:06.696 "uuid": "92b304ce-c21b-46d9-9ea3-8505d1b9c5d1", 00:12:06.696 "strip_size_kb": 0, 00:12:06.696 "state": "online", 00:12:06.696 "raid_level": "raid1", 00:12:06.696 "superblock": false, 00:12:06.696 "num_base_bdevs": 2, 00:12:06.696 "num_base_bdevs_discovered": 2, 00:12:06.696 "num_base_bdevs_operational": 2, 00:12:06.696 "process": { 00:12:06.696 "type": "rebuild", 00:12:06.696 "target": "spare", 00:12:06.696 "progress": { 00:12:06.696 "blocks": 20480, 00:12:06.696 "percent": 31 00:12:06.696 } 00:12:06.696 }, 00:12:06.696 "base_bdevs_list": [ 
00:12:06.696 { 00:12:06.696 "name": "spare", 00:12:06.696 "uuid": "74063a53-ba17-5164-b9a4-00a94128544a", 00:12:06.696 "is_configured": true, 00:12:06.696 "data_offset": 0, 00:12:06.696 "data_size": 65536 00:12:06.696 }, 00:12:06.696 { 00:12:06.696 "name": "BaseBdev2", 00:12:06.696 "uuid": "736d1cc8-1967-5266-ba29-01fbd63dbf4b", 00:12:06.696 "is_configured": true, 00:12:06.696 "data_offset": 0, 00:12:06.696 "data_size": 65536 00:12:06.696 } 00:12:06.696 ] 00:12:06.696 }' 00:12:06.696 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.696 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:06.696 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.696 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:06.696 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:06.696 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:06.696 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:06.696 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:06.696 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=365 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.697 
09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.697 "name": "raid_bdev1", 00:12:06.697 "uuid": "92b304ce-c21b-46d9-9ea3-8505d1b9c5d1", 00:12:06.697 "strip_size_kb": 0, 00:12:06.697 "state": "online", 00:12:06.697 "raid_level": "raid1", 00:12:06.697 "superblock": false, 00:12:06.697 "num_base_bdevs": 2, 00:12:06.697 "num_base_bdevs_discovered": 2, 00:12:06.697 "num_base_bdevs_operational": 2, 00:12:06.697 "process": { 00:12:06.697 "type": "rebuild", 00:12:06.697 "target": "spare", 00:12:06.697 "progress": { 00:12:06.697 "blocks": 22528, 00:12:06.697 "percent": 34 00:12:06.697 } 00:12:06.697 }, 00:12:06.697 "base_bdevs_list": [ 00:12:06.697 { 00:12:06.697 "name": "spare", 00:12:06.697 "uuid": "74063a53-ba17-5164-b9a4-00a94128544a", 00:12:06.697 "is_configured": true, 00:12:06.697 "data_offset": 0, 00:12:06.697 "data_size": 65536 00:12:06.697 }, 00:12:06.697 { 00:12:06.697 "name": "BaseBdev2", 00:12:06.697 "uuid": "736d1cc8-1967-5266-ba29-01fbd63dbf4b", 00:12:06.697 "is_configured": true, 00:12:06.697 "data_offset": 0, 00:12:06.697 "data_size": 65536 00:12:06.697 } 00:12:06.697 ] 00:12:06.697 }' 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:06.697 09:49:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:08.075 09:49:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:08.075 09:49:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.075 09:49:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.075 09:49:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.075 09:49:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.075 09:49:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.075 09:49:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.075 09:49:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.075 09:49:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.075 09:49:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.075 09:49:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.075 09:49:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.075 "name": "raid_bdev1", 00:12:08.075 "uuid": "92b304ce-c21b-46d9-9ea3-8505d1b9c5d1", 00:12:08.075 "strip_size_kb": 0, 00:12:08.075 "state": "online", 00:12:08.075 "raid_level": "raid1", 00:12:08.075 "superblock": false, 00:12:08.075 "num_base_bdevs": 2, 00:12:08.075 "num_base_bdevs_discovered": 2, 00:12:08.075 "num_base_bdevs_operational": 2, 00:12:08.075 "process": { 
00:12:08.075 "type": "rebuild", 00:12:08.075 "target": "spare", 00:12:08.075 "progress": { 00:12:08.075 "blocks": 45056, 00:12:08.075 "percent": 68 00:12:08.075 } 00:12:08.075 }, 00:12:08.075 "base_bdevs_list": [ 00:12:08.075 { 00:12:08.075 "name": "spare", 00:12:08.075 "uuid": "74063a53-ba17-5164-b9a4-00a94128544a", 00:12:08.075 "is_configured": true, 00:12:08.075 "data_offset": 0, 00:12:08.075 "data_size": 65536 00:12:08.075 }, 00:12:08.075 { 00:12:08.075 "name": "BaseBdev2", 00:12:08.075 "uuid": "736d1cc8-1967-5266-ba29-01fbd63dbf4b", 00:12:08.075 "is_configured": true, 00:12:08.075 "data_offset": 0, 00:12:08.075 "data_size": 65536 00:12:08.075 } 00:12:08.075 ] 00:12:08.075 }' 00:12:08.075 09:49:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.075 09:49:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.075 09:49:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.075 09:49:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.075 09:49:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:08.644 [2024-12-06 09:49:33.894217] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:08.644 [2024-12-06 09:49:33.894323] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:08.644 [2024-12-06 09:49:33.894364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.905 "name": "raid_bdev1", 00:12:08.905 "uuid": "92b304ce-c21b-46d9-9ea3-8505d1b9c5d1", 00:12:08.905 "strip_size_kb": 0, 00:12:08.905 "state": "online", 00:12:08.905 "raid_level": "raid1", 00:12:08.905 "superblock": false, 00:12:08.905 "num_base_bdevs": 2, 00:12:08.905 "num_base_bdevs_discovered": 2, 00:12:08.905 "num_base_bdevs_operational": 2, 00:12:08.905 "base_bdevs_list": [ 00:12:08.905 { 00:12:08.905 "name": "spare", 00:12:08.905 "uuid": "74063a53-ba17-5164-b9a4-00a94128544a", 00:12:08.905 "is_configured": true, 00:12:08.905 "data_offset": 0, 00:12:08.905 "data_size": 65536 00:12:08.905 }, 00:12:08.905 { 00:12:08.905 "name": "BaseBdev2", 00:12:08.905 "uuid": "736d1cc8-1967-5266-ba29-01fbd63dbf4b", 00:12:08.905 "is_configured": true, 00:12:08.905 "data_offset": 0, 00:12:08.905 "data_size": 65536 00:12:08.905 } 00:12:08.905 ] 00:12:08.905 }' 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.905 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:08.905 09:49:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.165 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.166 "name": "raid_bdev1", 00:12:09.166 "uuid": "92b304ce-c21b-46d9-9ea3-8505d1b9c5d1", 00:12:09.166 "strip_size_kb": 0, 00:12:09.166 "state": "online", 00:12:09.166 "raid_level": "raid1", 00:12:09.166 "superblock": false, 00:12:09.166 "num_base_bdevs": 2, 00:12:09.166 "num_base_bdevs_discovered": 2, 00:12:09.166 "num_base_bdevs_operational": 2, 00:12:09.166 "base_bdevs_list": [ 00:12:09.166 { 00:12:09.166 "name": "spare", 00:12:09.166 "uuid": "74063a53-ba17-5164-b9a4-00a94128544a", 00:12:09.166 "is_configured": true, 
00:12:09.166 "data_offset": 0, 00:12:09.166 "data_size": 65536 00:12:09.166 }, 00:12:09.166 { 00:12:09.166 "name": "BaseBdev2", 00:12:09.166 "uuid": "736d1cc8-1967-5266-ba29-01fbd63dbf4b", 00:12:09.166 "is_configured": true, 00:12:09.166 "data_offset": 0, 00:12:09.166 "data_size": 65536 00:12:09.166 } 00:12:09.166 ] 00:12:09.166 }' 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.166 "name": "raid_bdev1", 00:12:09.166 "uuid": "92b304ce-c21b-46d9-9ea3-8505d1b9c5d1", 00:12:09.166 "strip_size_kb": 0, 00:12:09.166 "state": "online", 00:12:09.166 "raid_level": "raid1", 00:12:09.166 "superblock": false, 00:12:09.166 "num_base_bdevs": 2, 00:12:09.166 "num_base_bdevs_discovered": 2, 00:12:09.166 "num_base_bdevs_operational": 2, 00:12:09.166 "base_bdevs_list": [ 00:12:09.166 { 00:12:09.166 "name": "spare", 00:12:09.166 "uuid": "74063a53-ba17-5164-b9a4-00a94128544a", 00:12:09.166 "is_configured": true, 00:12:09.166 "data_offset": 0, 00:12:09.166 "data_size": 65536 00:12:09.166 }, 00:12:09.166 { 00:12:09.166 "name": "BaseBdev2", 00:12:09.166 "uuid": "736d1cc8-1967-5266-ba29-01fbd63dbf4b", 00:12:09.166 "is_configured": true, 00:12:09.166 "data_offset": 0, 00:12:09.166 "data_size": 65536 00:12:09.166 } 00:12:09.166 ] 00:12:09.166 }' 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.166 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.736 [2024-12-06 09:49:34.808307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:09.736 [2024-12-06 09:49:34.808345] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:09.736 [2024-12-06 09:49:34.808430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.736 [2024-12-06 09:49:34.808495] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.736 [2024-12-06 09:49:34.808505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:09.736 09:49:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:09.995 /dev/nbd0 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.995 1+0 records in 00:12:09.995 1+0 records out 00:12:09.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036637 s, 11.2 MB/s 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:09.995 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:10.254 /dev/nbd1 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.254 1+0 records in 00:12:10.254 1+0 records out 00:12:10.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226779 s, 18.1 MB/s 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.254 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:10.513 09:49:35 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:10.513 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:10.513 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:10.513 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.513 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.513 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:10.513 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:10.513 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.513 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.513 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75222 00:12:10.771 09:49:35 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75222 ']' 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75222 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75222 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:10.771 killing process with pid 75222 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75222' 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75222 00:12:10.771 Received shutdown signal, test time was about 60.000000 seconds 00:12:10.771 00:12:10.771 Latency(us) 00:12:10.771 [2024-12-06T09:49:36.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.771 [2024-12-06T09:49:36.044Z] =================================================================================================================== 00:12:10.771 [2024-12-06T09:49:36.044Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:10.771 [2024-12-06 09:49:35.994421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:10.771 09:49:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75222 00:12:11.029 [2024-12-06 09:49:36.287847] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.405 ************************************ 00:12:12.405 END TEST raid_rebuild_test 00:12:12.405 ************************************ 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@786 -- # return 0 00:12:12.405 00:12:12.405 real 0m14.971s 00:12:12.405 user 0m17.325s 00:12:12.405 sys 0m2.710s 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.405 09:49:37 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:12.405 09:49:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:12.405 09:49:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.405 09:49:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.405 ************************************ 00:12:12.405 START TEST raid_rebuild_test_sb 00:12:12.405 ************************************ 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( 
i <= num_base_bdevs )) 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75634 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75634 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75634 ']' 00:12:12.405 
09:49:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.405 09:49:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.405 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:12.405 Zero copy mechanism will not be used. 00:12:12.405 [2024-12-06 09:49:37.521890] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:12:12.405 [2024-12-06 09:49:37.522005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75634 ] 00:12:12.663 [2024-12-06 09:49:37.692842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.663 [2024-12-06 09:49:37.812537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.930 [2024-12-06 09:49:38.009252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.930 [2024-12-06 09:49:38.009291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.210 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:13.210 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:13.210 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # 
for bdev in "${base_bdevs[@]}" 00:12:13.210 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:13.210 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.210 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.210 BaseBdev1_malloc 00:12:13.210 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.210 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:13.210 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.210 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.210 [2024-12-06 09:49:38.390553] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:13.210 [2024-12-06 09:49:38.390613] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.210 [2024-12-06 09:49:38.390634] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:13.210 [2024-12-06 09:49:38.390646] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.210 [2024-12-06 09:49:38.392729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.210 [2024-12-06 09:49:38.392766] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:13.210 BaseBdev1 00:12:13.210 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.210 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:13.211 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:13.211 09:49:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.211 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.211 BaseBdev2_malloc 00:12:13.211 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.211 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:13.211 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.211 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.211 [2024-12-06 09:49:38.445820] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:13.211 [2024-12-06 09:49:38.445878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.211 [2024-12-06 09:49:38.445898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:13.211 [2024-12-06 09:49:38.445910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.211 [2024-12-06 09:49:38.447932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.211 [2024-12-06 09:49:38.447971] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:13.211 BaseBdev2 00:12:13.211 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.211 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:13.211 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.211 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.480 spare_malloc 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.480 spare_delay 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.480 [2024-12-06 09:49:38.532827] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:13.480 [2024-12-06 09:49:38.532885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.480 [2024-12-06 09:49:38.532904] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:13.480 [2024-12-06 09:49:38.532915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.480 [2024-12-06 09:49:38.535003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.480 [2024-12-06 09:49:38.535041] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:13.480 spare 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.480 09:49:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.480 [2024-12-06 09:49:38.544879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.480 [2024-12-06 09:49:38.546680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.480 [2024-12-06 09:49:38.546862] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:13.480 [2024-12-06 09:49:38.546877] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:13.480 [2024-12-06 09:49:38.547126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:13.480 [2024-12-06 09:49:38.547323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:13.480 [2024-12-06 09:49:38.547343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:13.480 [2024-12-06 09:49:38.547499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.480 "name": "raid_bdev1", 00:12:13.480 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:13.480 "strip_size_kb": 0, 00:12:13.480 "state": "online", 00:12:13.480 "raid_level": "raid1", 00:12:13.480 "superblock": true, 00:12:13.480 "num_base_bdevs": 2, 00:12:13.480 "num_base_bdevs_discovered": 2, 00:12:13.480 "num_base_bdevs_operational": 2, 00:12:13.480 "base_bdevs_list": [ 00:12:13.480 { 00:12:13.480 "name": "BaseBdev1", 00:12:13.480 "uuid": "0f5893b8-61e9-57aa-97fe-be6f16bad629", 00:12:13.480 "is_configured": true, 00:12:13.480 "data_offset": 2048, 00:12:13.480 "data_size": 63488 00:12:13.480 }, 00:12:13.480 { 00:12:13.480 "name": "BaseBdev2", 00:12:13.480 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:13.480 "is_configured": true, 00:12:13.480 "data_offset": 2048, 00:12:13.480 "data_size": 63488 00:12:13.480 } 00:12:13.480 ] 00:12:13.480 }' 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.480 09:49:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.739 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:13.739 09:49:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:13.739 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.739 09:49:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.739 [2024-12-06 09:49:38.996375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:13.739 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.998 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:13.998 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:13.998 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.998 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.998 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:13.999 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:13.999 [2024-12-06 09:49:39.251734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:13.999 /dev/nbd0 00:12:14.258 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.259 1+0 records in 00:12:14.259 1+0 records out 00:12:14.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361033 s, 11.3 MB/s 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:14.259 09:49:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:18.453 63488+0 records in 00:12:18.453 63488+0 records out 00:12:18.453 32505856 bytes (33 MB, 31 MiB) copied, 3.71214 s, 8.8 MB/s 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.453 09:49:43 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:18.453 [2024-12-06 09:49:43.240326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.453 [2024-12-06 09:49:43.272363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.453 "name": "raid_bdev1", 00:12:18.453 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:18.453 "strip_size_kb": 0, 00:12:18.453 "state": "online", 00:12:18.453 "raid_level": "raid1", 00:12:18.453 "superblock": true, 
00:12:18.453 "num_base_bdevs": 2, 00:12:18.453 "num_base_bdevs_discovered": 1, 00:12:18.453 "num_base_bdevs_operational": 1, 00:12:18.453 "base_bdevs_list": [ 00:12:18.453 { 00:12:18.453 "name": null, 00:12:18.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.453 "is_configured": false, 00:12:18.453 "data_offset": 0, 00:12:18.453 "data_size": 63488 00:12:18.453 }, 00:12:18.453 { 00:12:18.453 "name": "BaseBdev2", 00:12:18.453 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:18.453 "is_configured": true, 00:12:18.453 "data_offset": 2048, 00:12:18.453 "data_size": 63488 00:12:18.453 } 00:12:18.453 ] 00:12:18.453 }' 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.453 09:49:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.453 [2024-12-06 09:49:43.711647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:18.713 [2024-12-06 09:49:43.728807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:18.713 09:49:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.713 09:49:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:18.713 [2024-12-06 09:49:43.730711] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.654 "name": "raid_bdev1", 00:12:19.654 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:19.654 "strip_size_kb": 0, 00:12:19.654 "state": "online", 00:12:19.654 "raid_level": "raid1", 00:12:19.654 "superblock": true, 00:12:19.654 "num_base_bdevs": 2, 00:12:19.654 "num_base_bdevs_discovered": 2, 00:12:19.654 "num_base_bdevs_operational": 2, 00:12:19.654 "process": { 00:12:19.654 "type": "rebuild", 00:12:19.654 "target": "spare", 00:12:19.654 "progress": { 00:12:19.654 "blocks": 20480, 00:12:19.654 "percent": 32 00:12:19.654 } 00:12:19.654 }, 00:12:19.654 "base_bdevs_list": [ 00:12:19.654 { 00:12:19.654 "name": "spare", 00:12:19.654 "uuid": "fea70eae-f7ed-59a8-967f-10349ee75560", 00:12:19.654 "is_configured": true, 00:12:19.654 "data_offset": 2048, 00:12:19.654 "data_size": 63488 00:12:19.654 }, 00:12:19.654 { 00:12:19.654 "name": "BaseBdev2", 00:12:19.654 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:19.654 "is_configured": true, 00:12:19.654 "data_offset": 2048, 00:12:19.654 "data_size": 63488 
00:12:19.654 } 00:12:19.654 ] 00:12:19.654 }' 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.654 09:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.654 [2024-12-06 09:49:44.869866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:19.914 [2024-12-06 09:49:44.935757] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:19.914 [2024-12-06 09:49:44.935821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.914 [2024-12-06 09:49:44.935835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:19.914 [2024-12-06 09:49:44.935848] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.914 09:49:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.914 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.914 "name": "raid_bdev1", 00:12:19.914 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:19.914 "strip_size_kb": 0, 00:12:19.914 "state": "online", 00:12:19.914 "raid_level": "raid1", 00:12:19.914 "superblock": true, 00:12:19.914 "num_base_bdevs": 2, 00:12:19.914 "num_base_bdevs_discovered": 1, 00:12:19.914 "num_base_bdevs_operational": 1, 00:12:19.914 "base_bdevs_list": [ 00:12:19.914 { 00:12:19.914 "name": null, 00:12:19.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.914 "is_configured": false, 00:12:19.914 "data_offset": 0, 00:12:19.914 "data_size": 63488 00:12:19.914 }, 00:12:19.914 { 00:12:19.914 "name": "BaseBdev2", 00:12:19.914 "uuid": 
"708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:19.914 "is_configured": true, 00:12:19.914 "data_offset": 2048, 00:12:19.914 "data_size": 63488 00:12:19.914 } 00:12:19.914 ] 00:12:19.914 }' 00:12:19.914 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.914 09:49:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.172 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:20.172 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.172 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:20.172 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:20.172 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.172 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.172 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.172 09:49:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.172 09:49:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.172 09:49:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.172 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:20.172 "name": "raid_bdev1", 00:12:20.172 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:20.172 "strip_size_kb": 0, 00:12:20.172 "state": "online", 00:12:20.172 "raid_level": "raid1", 00:12:20.172 "superblock": true, 00:12:20.172 "num_base_bdevs": 2, 00:12:20.172 "num_base_bdevs_discovered": 1, 00:12:20.172 "num_base_bdevs_operational": 1, 00:12:20.173 "base_bdevs_list": [ 00:12:20.173 { 
00:12:20.173 "name": null, 00:12:20.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.173 "is_configured": false, 00:12:20.173 "data_offset": 0, 00:12:20.173 "data_size": 63488 00:12:20.173 }, 00:12:20.173 { 00:12:20.173 "name": "BaseBdev2", 00:12:20.173 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:20.173 "is_configured": true, 00:12:20.173 "data_offset": 2048, 00:12:20.173 "data_size": 63488 00:12:20.173 } 00:12:20.173 ] 00:12:20.173 }' 00:12:20.430 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:20.430 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:20.430 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:20.430 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:20.430 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:20.430 09:49:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.430 09:49:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.430 [2024-12-06 09:49:45.551415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:20.430 [2024-12-06 09:49:45.567408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:20.430 09:49:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.430 09:49:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:20.430 [2024-12-06 09:49:45.569204] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:21.366 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.366 09:49:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.366 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.366 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.366 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.366 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.366 09:49:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.366 09:49:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.366 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.366 09:49:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.366 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.366 "name": "raid_bdev1", 00:12:21.366 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:21.366 "strip_size_kb": 0, 00:12:21.366 "state": "online", 00:12:21.366 "raid_level": "raid1", 00:12:21.366 "superblock": true, 00:12:21.366 "num_base_bdevs": 2, 00:12:21.366 "num_base_bdevs_discovered": 2, 00:12:21.366 "num_base_bdevs_operational": 2, 00:12:21.366 "process": { 00:12:21.366 "type": "rebuild", 00:12:21.366 "target": "spare", 00:12:21.366 "progress": { 00:12:21.366 "blocks": 20480, 00:12:21.366 "percent": 32 00:12:21.366 } 00:12:21.366 }, 00:12:21.366 "base_bdevs_list": [ 00:12:21.366 { 00:12:21.366 "name": "spare", 00:12:21.366 "uuid": "fea70eae-f7ed-59a8-967f-10349ee75560", 00:12:21.366 "is_configured": true, 00:12:21.366 "data_offset": 2048, 00:12:21.366 "data_size": 63488 00:12:21.366 }, 00:12:21.366 { 00:12:21.366 "name": "BaseBdev2", 00:12:21.366 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:21.366 
"is_configured": true, 00:12:21.366 "data_offset": 2048, 00:12:21.366 "data_size": 63488 00:12:21.366 } 00:12:21.366 ] 00:12:21.366 }' 00:12:21.366 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:21.675 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=380 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.675 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.676 "name": "raid_bdev1", 00:12:21.676 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:21.676 "strip_size_kb": 0, 00:12:21.676 "state": "online", 00:12:21.676 "raid_level": "raid1", 00:12:21.676 "superblock": true, 00:12:21.676 "num_base_bdevs": 2, 00:12:21.676 "num_base_bdevs_discovered": 2, 00:12:21.676 "num_base_bdevs_operational": 2, 00:12:21.676 "process": { 00:12:21.676 "type": "rebuild", 00:12:21.676 "target": "spare", 00:12:21.676 "progress": { 00:12:21.676 "blocks": 22528, 00:12:21.676 "percent": 35 00:12:21.676 } 00:12:21.676 }, 00:12:21.676 "base_bdevs_list": [ 00:12:21.676 { 00:12:21.676 "name": "spare", 00:12:21.676 "uuid": "fea70eae-f7ed-59a8-967f-10349ee75560", 00:12:21.676 "is_configured": true, 00:12:21.676 "data_offset": 2048, 00:12:21.676 "data_size": 63488 00:12:21.676 }, 00:12:21.676 { 00:12:21.676 "name": "BaseBdev2", 00:12:21.676 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:21.676 "is_configured": true, 00:12:21.676 "data_offset": 2048, 00:12:21.676 "data_size": 63488 00:12:21.676 } 00:12:21.676 ] 00:12:21.676 }' 00:12:21.676 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.676 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.676 09:49:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.676 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.676 09:49:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:22.651 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:22.651 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.651 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.651 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.651 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.651 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.651 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.651 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.651 09:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.651 09:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.651 09:49:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.651 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.651 "name": "raid_bdev1", 00:12:22.651 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:22.651 "strip_size_kb": 0, 00:12:22.651 "state": "online", 00:12:22.651 "raid_level": "raid1", 00:12:22.651 "superblock": true, 00:12:22.651 "num_base_bdevs": 2, 00:12:22.651 "num_base_bdevs_discovered": 2, 00:12:22.651 "num_base_bdevs_operational": 2, 00:12:22.651 "process": { 
00:12:22.651 "type": "rebuild", 00:12:22.651 "target": "spare", 00:12:22.651 "progress": { 00:12:22.651 "blocks": 45056, 00:12:22.651 "percent": 70 00:12:22.651 } 00:12:22.651 }, 00:12:22.651 "base_bdevs_list": [ 00:12:22.651 { 00:12:22.651 "name": "spare", 00:12:22.651 "uuid": "fea70eae-f7ed-59a8-967f-10349ee75560", 00:12:22.651 "is_configured": true, 00:12:22.651 "data_offset": 2048, 00:12:22.651 "data_size": 63488 00:12:22.651 }, 00:12:22.651 { 00:12:22.651 "name": "BaseBdev2", 00:12:22.651 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:22.651 "is_configured": true, 00:12:22.651 "data_offset": 2048, 00:12:22.651 "data_size": 63488 00:12:22.651 } 00:12:22.651 ] 00:12:22.651 }' 00:12:22.651 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.911 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.911 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.911 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.911 09:49:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:23.480 [2024-12-06 09:49:48.682162] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:23.480 [2024-12-06 09:49:48.682265] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:23.481 [2024-12-06 09:49:48.682381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.740 09:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:23.740 09:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.740 09:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.740 
09:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.740 09:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.740 09:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.740 09:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.740 09:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.740 09:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.740 09:49:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.740 09:49:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.001 "name": "raid_bdev1", 00:12:24.001 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:24.001 "strip_size_kb": 0, 00:12:24.001 "state": "online", 00:12:24.001 "raid_level": "raid1", 00:12:24.001 "superblock": true, 00:12:24.001 "num_base_bdevs": 2, 00:12:24.001 "num_base_bdevs_discovered": 2, 00:12:24.001 "num_base_bdevs_operational": 2, 00:12:24.001 "base_bdevs_list": [ 00:12:24.001 { 00:12:24.001 "name": "spare", 00:12:24.001 "uuid": "fea70eae-f7ed-59a8-967f-10349ee75560", 00:12:24.001 "is_configured": true, 00:12:24.001 "data_offset": 2048, 00:12:24.001 "data_size": 63488 00:12:24.001 }, 00:12:24.001 { 00:12:24.001 "name": "BaseBdev2", 00:12:24.001 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:24.001 "is_configured": true, 00:12:24.001 "data_offset": 2048, 00:12:24.001 "data_size": 63488 00:12:24.001 } 00:12:24.001 ] 00:12:24.001 }' 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.001 "name": "raid_bdev1", 00:12:24.001 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:24.001 "strip_size_kb": 0, 00:12:24.001 "state": "online", 00:12:24.001 "raid_level": "raid1", 00:12:24.001 "superblock": true, 00:12:24.001 "num_base_bdevs": 2, 00:12:24.001 "num_base_bdevs_discovered": 2, 00:12:24.001 "num_base_bdevs_operational": 2, 00:12:24.001 "base_bdevs_list": [ 00:12:24.001 { 00:12:24.001 
"name": "spare", 00:12:24.001 "uuid": "fea70eae-f7ed-59a8-967f-10349ee75560", 00:12:24.001 "is_configured": true, 00:12:24.001 "data_offset": 2048, 00:12:24.001 "data_size": 63488 00:12:24.001 }, 00:12:24.001 { 00:12:24.001 "name": "BaseBdev2", 00:12:24.001 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:24.001 "is_configured": true, 00:12:24.001 "data_offset": 2048, 00:12:24.001 "data_size": 63488 00:12:24.001 } 00:12:24.001 ] 00:12:24.001 }' 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.001 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.260 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.260 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.260 "name": "raid_bdev1", 00:12:24.260 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:24.260 "strip_size_kb": 0, 00:12:24.260 "state": "online", 00:12:24.260 "raid_level": "raid1", 00:12:24.260 "superblock": true, 00:12:24.260 "num_base_bdevs": 2, 00:12:24.260 "num_base_bdevs_discovered": 2, 00:12:24.260 "num_base_bdevs_operational": 2, 00:12:24.260 "base_bdevs_list": [ 00:12:24.260 { 00:12:24.260 "name": "spare", 00:12:24.260 "uuid": "fea70eae-f7ed-59a8-967f-10349ee75560", 00:12:24.260 "is_configured": true, 00:12:24.260 "data_offset": 2048, 00:12:24.260 "data_size": 63488 00:12:24.260 }, 00:12:24.260 { 00:12:24.260 "name": "BaseBdev2", 00:12:24.261 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:24.261 "is_configured": true, 00:12:24.261 "data_offset": 2048, 00:12:24.261 "data_size": 63488 00:12:24.261 } 00:12:24.261 ] 00:12:24.261 }' 00:12:24.261 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.261 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.520 [2024-12-06 09:49:49.659791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.520 [2024-12-06 09:49:49.659828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.520 [2024-12-06 09:49:49.659910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.520 [2024-12-06 09:49:49.659994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.520 [2024-12-06 09:49:49.660009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:24.520 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:24.780 /dev/nbd0 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.780 1+0 records in 00:12:24.780 1+0 records out 00:12:24.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026925 s, 15.2 MB/s 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:24.780 09:49:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:25.040 /dev/nbd1 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:25.040 09:49:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:25.040 1+0 records in 00:12:25.040 1+0 records out 00:12:25.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262124 s, 15.6 MB/s 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:25.040 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:25.299 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:25.299 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:25.299 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:25.299 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:25.299 
09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:25.299 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.299 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:25.559 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:25.559 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:25.559 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:25.559 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.559 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.559 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:25.559 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:25.559 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.559 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:25.559 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:25.559 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.820 [2024-12-06 09:49:50.857063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:25.820 [2024-12-06 09:49:50.857123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.820 [2024-12-06 09:49:50.857159] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:25.820 [2024-12-06 09:49:50.857169] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.820 [2024-12-06 09:49:50.859463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.820 [2024-12-06 09:49:50.859495] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:25.820 [2024-12-06 09:49:50.859584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:25.820 [2024-12-06 
09:49:50.859633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:25.820 [2024-12-06 09:49:50.859755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.820 spare 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.820 [2024-12-06 09:49:50.959653] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:25.820 [2024-12-06 09:49:50.959687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.820 [2024-12-06 09:49:50.959981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:25.820 [2024-12-06 09:49:50.960208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:25.820 [2024-12-06 09:49:50.960230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:25.820 [2024-12-06 09:49:50.960409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.820 09:49:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.820 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.820 "name": "raid_bdev1", 00:12:25.820 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:25.820 "strip_size_kb": 0, 00:12:25.820 "state": "online", 00:12:25.820 "raid_level": "raid1", 00:12:25.820 "superblock": true, 00:12:25.820 "num_base_bdevs": 2, 00:12:25.820 "num_base_bdevs_discovered": 2, 00:12:25.820 "num_base_bdevs_operational": 2, 00:12:25.820 "base_bdevs_list": [ 00:12:25.820 { 00:12:25.820 "name": "spare", 00:12:25.820 "uuid": "fea70eae-f7ed-59a8-967f-10349ee75560", 00:12:25.820 "is_configured": true, 00:12:25.820 "data_offset": 2048, 00:12:25.820 "data_size": 63488 00:12:25.820 }, 00:12:25.820 { 00:12:25.820 "name": "BaseBdev2", 00:12:25.820 "uuid": 
"708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:25.820 "is_configured": true, 00:12:25.820 "data_offset": 2048, 00:12:25.820 "data_size": 63488 00:12:25.820 } 00:12:25.820 ] 00:12:25.820 }' 00:12:25.820 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.820 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.407 "name": "raid_bdev1", 00:12:26.407 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:26.407 "strip_size_kb": 0, 00:12:26.407 "state": "online", 00:12:26.407 "raid_level": "raid1", 00:12:26.407 "superblock": true, 00:12:26.407 "num_base_bdevs": 2, 00:12:26.407 "num_base_bdevs_discovered": 2, 00:12:26.407 "num_base_bdevs_operational": 2, 00:12:26.407 "base_bdevs_list": [ 00:12:26.407 { 
00:12:26.407 "name": "spare", 00:12:26.407 "uuid": "fea70eae-f7ed-59a8-967f-10349ee75560", 00:12:26.407 "is_configured": true, 00:12:26.407 "data_offset": 2048, 00:12:26.407 "data_size": 63488 00:12:26.407 }, 00:12:26.407 { 00:12:26.407 "name": "BaseBdev2", 00:12:26.407 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:26.407 "is_configured": true, 00:12:26.407 "data_offset": 2048, 00:12:26.407 "data_size": 63488 00:12:26.407 } 00:12:26.407 ] 00:12:26.407 }' 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.407 [2024-12-06 09:49:51.571913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.407 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:26.408 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.408 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.408 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.408 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.408 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.408 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.408 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.408 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.408 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.408 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.408 "name": "raid_bdev1", 00:12:26.408 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:26.408 "strip_size_kb": 0, 00:12:26.408 
"state": "online", 00:12:26.408 "raid_level": "raid1", 00:12:26.408 "superblock": true, 00:12:26.408 "num_base_bdevs": 2, 00:12:26.408 "num_base_bdevs_discovered": 1, 00:12:26.408 "num_base_bdevs_operational": 1, 00:12:26.408 "base_bdevs_list": [ 00:12:26.408 { 00:12:26.408 "name": null, 00:12:26.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.408 "is_configured": false, 00:12:26.408 "data_offset": 0, 00:12:26.408 "data_size": 63488 00:12:26.408 }, 00:12:26.408 { 00:12:26.408 "name": "BaseBdev2", 00:12:26.408 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:26.408 "is_configured": true, 00:12:26.408 "data_offset": 2048, 00:12:26.408 "data_size": 63488 00:12:26.408 } 00:12:26.408 ] 00:12:26.408 }' 00:12:26.408 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.408 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.974 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:26.975 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.975 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.975 [2024-12-06 09:49:51.983269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:26.975 [2024-12-06 09:49:51.983464] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:26.975 [2024-12-06 09:49:51.983486] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:26.975 [2024-12-06 09:49:51.983521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:26.975 [2024-12-06 09:49:51.999728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:26.975 09:49:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.975 09:49:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:26.975 [2024-12-06 09:49:52.001618] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.911 "name": "raid_bdev1", 00:12:27.911 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:27.911 "strip_size_kb": 0, 00:12:27.911 "state": "online", 00:12:27.911 "raid_level": "raid1", 
00:12:27.911 "superblock": true, 00:12:27.911 "num_base_bdevs": 2, 00:12:27.911 "num_base_bdevs_discovered": 2, 00:12:27.911 "num_base_bdevs_operational": 2, 00:12:27.911 "process": { 00:12:27.911 "type": "rebuild", 00:12:27.911 "target": "spare", 00:12:27.911 "progress": { 00:12:27.911 "blocks": 20480, 00:12:27.911 "percent": 32 00:12:27.911 } 00:12:27.911 }, 00:12:27.911 "base_bdevs_list": [ 00:12:27.911 { 00:12:27.911 "name": "spare", 00:12:27.911 "uuid": "fea70eae-f7ed-59a8-967f-10349ee75560", 00:12:27.911 "is_configured": true, 00:12:27.911 "data_offset": 2048, 00:12:27.911 "data_size": 63488 00:12:27.911 }, 00:12:27.911 { 00:12:27.911 "name": "BaseBdev2", 00:12:27.911 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:27.911 "is_configured": true, 00:12:27.911 "data_offset": 2048, 00:12:27.911 "data_size": 63488 00:12:27.911 } 00:12:27.911 ] 00:12:27.911 }' 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.911 09:49:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.911 [2024-12-06 09:49:53.149194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:28.169 [2024-12-06 09:49:53.206678] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:28.169 [2024-12-06 09:49:53.206741] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:28.169 [2024-12-06 09:49:53.206755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:28.169 [2024-12-06 09:49:53.206765] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:28.169 09:49:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.169 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:28.169 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.169 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.169 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.169 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.169 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:28.169 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.169 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.169 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.169 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.169 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.170 09:49:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.170 09:49:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.170 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.170 09:49:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.170 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.170 "name": "raid_bdev1", 00:12:28.170 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:28.170 "strip_size_kb": 0, 00:12:28.170 "state": "online", 00:12:28.170 "raid_level": "raid1", 00:12:28.170 "superblock": true, 00:12:28.170 "num_base_bdevs": 2, 00:12:28.170 "num_base_bdevs_discovered": 1, 00:12:28.170 "num_base_bdevs_operational": 1, 00:12:28.170 "base_bdevs_list": [ 00:12:28.170 { 00:12:28.170 "name": null, 00:12:28.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.170 "is_configured": false, 00:12:28.170 "data_offset": 0, 00:12:28.170 "data_size": 63488 00:12:28.170 }, 00:12:28.170 { 00:12:28.170 "name": "BaseBdev2", 00:12:28.170 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:28.170 "is_configured": true, 00:12:28.170 "data_offset": 2048, 00:12:28.170 "data_size": 63488 00:12:28.170 } 00:12:28.170 ] 00:12:28.170 }' 00:12:28.170 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.170 09:49:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.450 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:28.450 09:49:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.450 09:49:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.450 [2024-12-06 09:49:53.652878] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:28.450 [2024-12-06 09:49:53.652943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.450 [2024-12-06 09:49:53.652966] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:28.450 [2024-12-06 09:49:53.652978] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.450 [2024-12-06 09:49:53.653482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.450 [2024-12-06 09:49:53.653519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:28.450 [2024-12-06 09:49:53.653623] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:28.450 [2024-12-06 09:49:53.653645] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:28.450 [2024-12-06 09:49:53.653656] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:28.450 [2024-12-06 09:49:53.653683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:28.450 [2024-12-06 09:49:53.669756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:28.450 spare 00:12:28.450 09:49:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.450 09:49:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:28.450 [2024-12-06 09:49:53.671649] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.828 "name": "raid_bdev1", 00:12:29.828 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:29.828 "strip_size_kb": 0, 00:12:29.828 "state": "online", 00:12:29.828 "raid_level": "raid1", 00:12:29.828 "superblock": true, 00:12:29.828 "num_base_bdevs": 2, 00:12:29.828 "num_base_bdevs_discovered": 2, 00:12:29.828 "num_base_bdevs_operational": 2, 00:12:29.828 "process": { 00:12:29.828 "type": "rebuild", 00:12:29.828 "target": "spare", 00:12:29.828 "progress": { 00:12:29.828 "blocks": 20480, 00:12:29.828 "percent": 32 00:12:29.828 } 00:12:29.828 }, 00:12:29.828 "base_bdevs_list": [ 00:12:29.828 { 00:12:29.828 "name": "spare", 00:12:29.828 "uuid": "fea70eae-f7ed-59a8-967f-10349ee75560", 00:12:29.828 "is_configured": true, 00:12:29.828 "data_offset": 2048, 00:12:29.828 "data_size": 63488 00:12:29.828 }, 00:12:29.828 { 00:12:29.828 "name": "BaseBdev2", 00:12:29.828 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:29.828 "is_configured": true, 00:12:29.828 "data_offset": 2048, 00:12:29.828 "data_size": 63488 00:12:29.828 } 00:12:29.828 ] 00:12:29.828 }' 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.828 
09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.828 [2024-12-06 09:49:54.807114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.828 [2024-12-06 09:49:54.876542] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:29.828 [2024-12-06 09:49:54.876594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.828 [2024-12-06 09:49:54.876610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:29.828 [2024-12-06 09:49:54.876617] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.828 "name": "raid_bdev1", 00:12:29.828 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:29.828 "strip_size_kb": 0, 00:12:29.828 "state": "online", 00:12:29.828 "raid_level": "raid1", 00:12:29.828 "superblock": true, 00:12:29.828 "num_base_bdevs": 2, 00:12:29.828 "num_base_bdevs_discovered": 1, 00:12:29.828 "num_base_bdevs_operational": 1, 00:12:29.828 "base_bdevs_list": [ 00:12:29.828 { 00:12:29.828 "name": null, 00:12:29.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.828 "is_configured": false, 00:12:29.828 "data_offset": 0, 00:12:29.828 "data_size": 63488 00:12:29.828 }, 00:12:29.828 { 00:12:29.828 "name": "BaseBdev2", 00:12:29.828 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:29.828 "is_configured": true, 00:12:29.828 "data_offset": 2048, 00:12:29.828 "data_size": 63488 00:12:29.828 } 00:12:29.828 ] 00:12:29.828 }' 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.828 09:49:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.396 09:49:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.396 "name": "raid_bdev1", 00:12:30.396 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:30.396 "strip_size_kb": 0, 00:12:30.396 "state": "online", 00:12:30.396 "raid_level": "raid1", 00:12:30.396 "superblock": true, 00:12:30.396 "num_base_bdevs": 2, 00:12:30.396 "num_base_bdevs_discovered": 1, 00:12:30.396 "num_base_bdevs_operational": 1, 00:12:30.396 "base_bdevs_list": [ 00:12:30.396 { 00:12:30.396 "name": null, 00:12:30.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.396 "is_configured": false, 00:12:30.396 "data_offset": 0, 00:12:30.396 "data_size": 63488 00:12:30.396 }, 00:12:30.396 { 00:12:30.396 "name": "BaseBdev2", 00:12:30.396 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:30.396 "is_configured": true, 00:12:30.396 "data_offset": 2048, 00:12:30.396 "data_size": 
63488 00:12:30.396 } 00:12:30.396 ] 00:12:30.396 }' 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.396 [2024-12-06 09:49:55.537133] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:30.396 [2024-12-06 09:49:55.537250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.396 [2024-12-06 09:49:55.537285] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:30.396 [2024-12-06 09:49:55.537303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.396 [2024-12-06 09:49:55.537778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.396 [2024-12-06 09:49:55.537798] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:30.396 [2024-12-06 09:49:55.537884] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:30.396 [2024-12-06 09:49:55.537897] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:30.396 [2024-12-06 09:49:55.537907] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:30.396 [2024-12-06 09:49:55.537917] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:30.396 BaseBdev1 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.396 09:49:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.331 09:49:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.591 09:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.591 "name": "raid_bdev1", 00:12:31.591 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:31.591 "strip_size_kb": 0, 00:12:31.591 "state": "online", 00:12:31.591 "raid_level": "raid1", 00:12:31.591 "superblock": true, 00:12:31.591 "num_base_bdevs": 2, 00:12:31.591 "num_base_bdevs_discovered": 1, 00:12:31.591 "num_base_bdevs_operational": 1, 00:12:31.591 "base_bdevs_list": [ 00:12:31.591 { 00:12:31.591 "name": null, 00:12:31.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.591 "is_configured": false, 00:12:31.591 "data_offset": 0, 00:12:31.591 "data_size": 63488 00:12:31.591 }, 00:12:31.591 { 00:12:31.591 "name": "BaseBdev2", 00:12:31.591 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:31.591 "is_configured": true, 00:12:31.591 "data_offset": 2048, 00:12:31.591 "data_size": 63488 00:12:31.591 } 00:12:31.591 ] 00:12:31.591 }' 00:12:31.591 09:49:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.591 09:49:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.851 09:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:31.851 09:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.851 09:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:31.851 09:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:31.851 09:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.851 09:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.851 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.851 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.851 09:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.851 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.851 09:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.851 "name": "raid_bdev1", 00:12:31.851 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:31.851 "strip_size_kb": 0, 00:12:31.851 "state": "online", 00:12:31.851 "raid_level": "raid1", 00:12:31.851 "superblock": true, 00:12:31.851 "num_base_bdevs": 2, 00:12:31.851 "num_base_bdevs_discovered": 1, 00:12:31.851 "num_base_bdevs_operational": 1, 00:12:31.851 "base_bdevs_list": [ 00:12:31.851 { 00:12:31.851 "name": null, 00:12:31.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.851 "is_configured": false, 00:12:31.851 "data_offset": 0, 00:12:31.851 "data_size": 63488 00:12:31.851 }, 00:12:31.851 { 00:12:31.851 "name": "BaseBdev2", 00:12:31.851 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:31.851 "is_configured": true, 00:12:31.851 "data_offset": 2048, 00:12:31.851 "data_size": 63488 00:12:31.851 } 00:12:31.851 ] 00:12:31.851 }' 00:12:31.851 09:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.851 09:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:31.851 09:49:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.111 [2024-12-06 09:49:57.170414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.111 [2024-12-06 09:49:57.170667] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:32.111 [2024-12-06 09:49:57.170740] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:32.111 request: 00:12:32.111 { 00:12:32.111 "base_bdev": "BaseBdev1", 00:12:32.111 "raid_bdev": "raid_bdev1", 00:12:32.111 "method": 
"bdev_raid_add_base_bdev", 00:12:32.111 "req_id": 1 00:12:32.111 } 00:12:32.111 Got JSON-RPC error response 00:12:32.111 response: 00:12:32.111 { 00:12:32.111 "code": -22, 00:12:32.111 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:32.111 } 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:32.111 09:49:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.051 09:49:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.051 "name": "raid_bdev1", 00:12:33.051 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:33.051 "strip_size_kb": 0, 00:12:33.051 "state": "online", 00:12:33.051 "raid_level": "raid1", 00:12:33.051 "superblock": true, 00:12:33.051 "num_base_bdevs": 2, 00:12:33.051 "num_base_bdevs_discovered": 1, 00:12:33.051 "num_base_bdevs_operational": 1, 00:12:33.051 "base_bdevs_list": [ 00:12:33.051 { 00:12:33.051 "name": null, 00:12:33.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.051 "is_configured": false, 00:12:33.051 "data_offset": 0, 00:12:33.051 "data_size": 63488 00:12:33.051 }, 00:12:33.051 { 00:12:33.051 "name": "BaseBdev2", 00:12:33.051 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:33.051 "is_configured": true, 00:12:33.051 "data_offset": 2048, 00:12:33.051 "data_size": 63488 00:12:33.051 } 00:12:33.051 ] 00:12:33.051 }' 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.051 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.621 "name": "raid_bdev1", 00:12:33.621 "uuid": "c9e1f54d-5da9-4ab5-8cb1-542ae033e8b0", 00:12:33.621 "strip_size_kb": 0, 00:12:33.621 "state": "online", 00:12:33.621 "raid_level": "raid1", 00:12:33.621 "superblock": true, 00:12:33.621 "num_base_bdevs": 2, 00:12:33.621 "num_base_bdevs_discovered": 1, 00:12:33.621 "num_base_bdevs_operational": 1, 00:12:33.621 "base_bdevs_list": [ 00:12:33.621 { 00:12:33.621 "name": null, 00:12:33.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.621 "is_configured": false, 00:12:33.621 "data_offset": 0, 00:12:33.621 "data_size": 63488 00:12:33.621 }, 00:12:33.621 { 00:12:33.621 "name": "BaseBdev2", 00:12:33.621 "uuid": "708e12f1-bb9d-52e4-8738-d00595784a8b", 00:12:33.621 "is_configured": true, 00:12:33.621 "data_offset": 2048, 00:12:33.621 "data_size": 63488 00:12:33.621 } 00:12:33.621 ] 00:12:33.621 }' 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75634 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75634 ']' 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75634 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75634 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75634' 00:12:33.621 killing process with pid 75634 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75634 00:12:33.621 Received shutdown signal, test time was about 60.000000 seconds 00:12:33.621 00:12:33.621 Latency(us) 00:12:33.621 [2024-12-06T09:49:58.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.621 [2024-12-06T09:49:58.894Z] =================================================================================================================== 00:12:33.621 [2024-12-06T09:49:58.894Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:33.621 [2024-12-06 09:49:58.819513] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:33.621 [2024-12-06 
09:49:58.819651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.621 [2024-12-06 09:49:58.819703] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.621 [2024-12-06 09:49:58.819714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:33.621 09:49:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75634 00:12:33.880 [2024-12-06 09:49:59.108381] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:35.259 00:12:35.259 real 0m22.773s 00:12:35.259 user 0m28.018s 00:12:35.259 sys 0m3.383s 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.259 ************************************ 00:12:35.259 END TEST raid_rebuild_test_sb 00:12:35.259 ************************************ 00:12:35.259 09:50:00 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:35.259 09:50:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:35.259 09:50:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.259 09:50:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:35.259 ************************************ 00:12:35.259 START TEST raid_rebuild_test_io 00:12:35.259 ************************************ 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:35.259 
09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76355 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76355 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76355 ']' 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.259 09:50:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.259 [2024-12-06 09:50:00.375491] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:12:35.259 [2024-12-06 09:50:00.375671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:35.260 Zero copy mechanism will not be used. 
00:12:35.260 -allocations --file-prefix=spdk_pid76355 ] 00:12:35.519 [2024-12-06 09:50:00.552036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.519 [2024-12-06 09:50:00.670450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.779 [2024-12-06 09:50:00.866034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:35.779 [2024-12-06 09:50:00.866203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.045 BaseBdev1_malloc 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.045 [2024-12-06 09:50:01.246647] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:36.045 [2024-12-06 09:50:01.246716] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.045 [2024-12-06 09:50:01.246756] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:12:36.045 [2024-12-06 09:50:01.246768] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.045 [2024-12-06 09:50:01.248858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.045 [2024-12-06 09:50:01.248902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:36.045 BaseBdev1 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.045 BaseBdev2_malloc 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.045 [2024-12-06 09:50:01.300337] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:36.045 [2024-12-06 09:50:01.300412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.045 [2024-12-06 09:50:01.300435] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:36.045 [2024-12-06 09:50:01.300446] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.045 [2024-12-06 09:50:01.302436] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.045 [2024-12-06 09:50:01.302474] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:36.045 BaseBdev2 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.045 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.305 spare_malloc 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.305 spare_delay 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.305 [2024-12-06 09:50:01.377191] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:36.305 [2024-12-06 09:50:01.377307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.305 [2024-12-06 09:50:01.377346] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 
00:12:36.305 [2024-12-06 09:50:01.377380] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.305 [2024-12-06 09:50:01.379429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.305 [2024-12-06 09:50:01.379519] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:36.305 spare 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.305 [2024-12-06 09:50:01.389219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.305 [2024-12-06 09:50:01.391104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.305 [2024-12-06 09:50:01.391271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:36.305 [2024-12-06 09:50:01.391324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:36.305 [2024-12-06 09:50:01.391609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:36.305 [2024-12-06 09:50:01.391844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:36.305 [2024-12-06 09:50:01.391895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:36.305 [2024-12-06 09:50:01.392113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.305 09:50:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.305 "name": "raid_bdev1", 00:12:36.305 "uuid": "059ccd74-3deb-4e9e-8ee6-fee93823052c", 00:12:36.305 "strip_size_kb": 0, 00:12:36.305 "state": "online", 00:12:36.305 "raid_level": "raid1", 00:12:36.305 "superblock": false, 00:12:36.305 "num_base_bdevs": 2, 
00:12:36.305 "num_base_bdevs_discovered": 2, 00:12:36.305 "num_base_bdevs_operational": 2, 00:12:36.305 "base_bdevs_list": [ 00:12:36.305 { 00:12:36.305 "name": "BaseBdev1", 00:12:36.305 "uuid": "befc078b-acd5-5bac-96f3-487314b995fe", 00:12:36.305 "is_configured": true, 00:12:36.305 "data_offset": 0, 00:12:36.305 "data_size": 65536 00:12:36.305 }, 00:12:36.305 { 00:12:36.305 "name": "BaseBdev2", 00:12:36.305 "uuid": "29fe5c31-f86a-53c6-b7fc-6e0c4fc5c24d", 00:12:36.305 "is_configured": true, 00:12:36.305 "data_offset": 0, 00:12:36.305 "data_size": 65536 00:12:36.305 } 00:12:36.305 ] 00:12:36.305 }' 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.305 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.566 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:36.566 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.566 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.566 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:36.566 [2024-12-06 09:50:01.808788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.566 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r 
'.[].base_bdevs_list[0].data_offset' 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.826 [2024-12-06 09:50:01.920286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.826 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.827 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.827 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.827 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.827 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.827 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.827 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.827 "name": "raid_bdev1", 00:12:36.827 "uuid": "059ccd74-3deb-4e9e-8ee6-fee93823052c", 00:12:36.827 "strip_size_kb": 0, 00:12:36.827 "state": "online", 00:12:36.827 "raid_level": "raid1", 00:12:36.827 "superblock": false, 00:12:36.827 "num_base_bdevs": 2, 00:12:36.827 "num_base_bdevs_discovered": 1, 00:12:36.827 "num_base_bdevs_operational": 1, 00:12:36.827 "base_bdevs_list": [ 00:12:36.827 { 00:12:36.827 "name": null, 00:12:36.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.827 "is_configured": false, 00:12:36.827 "data_offset": 0, 00:12:36.827 "data_size": 65536 00:12:36.827 }, 00:12:36.827 { 00:12:36.827 "name": "BaseBdev2", 00:12:36.827 "uuid": "29fe5c31-f86a-53c6-b7fc-6e0c4fc5c24d", 00:12:36.827 "is_configured": true, 00:12:36.827 "data_offset": 0, 00:12:36.827 "data_size": 65536 00:12:36.827 } 00:12:36.827 ] 00:12:36.827 }' 00:12:36.827 09:50:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.827 09:50:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.827 [2024-12-06 09:50:02.012334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:36.827 
I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:36.827 Zero copy mechanism will not be used. 00:12:36.827 Running I/O for 60 seconds... 00:12:37.086 09:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:37.086 09:50:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.086 09:50:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.086 [2024-12-06 09:50:02.324348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.345 09:50:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.345 09:50:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:37.345 [2024-12-06 09:50:02.370711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:37.345 [2024-12-06 09:50:02.372706] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:37.345 [2024-12-06 09:50:02.486679] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:37.345 [2024-12-06 09:50:02.487399] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:37.345 [2024-12-06 09:50:02.602920] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:37.345 [2024-12-06 09:50:02.603308] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:37.915 [2024-12-06 09:50:02.940312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:37.915 167.00 IOPS, 501.00 MiB/s [2024-12-06T09:50:03.188Z] [2024-12-06 09:50:03.160202] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:38.175 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.175 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.175 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.175 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.175 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.175 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.175 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.175 09:50:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.175 09:50:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.175 [2024-12-06 09:50:03.378271] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:38.175 09:50:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.175 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.175 "name": "raid_bdev1", 00:12:38.175 "uuid": "059ccd74-3deb-4e9e-8ee6-fee93823052c", 00:12:38.175 "strip_size_kb": 0, 00:12:38.175 "state": "online", 00:12:38.175 "raid_level": "raid1", 00:12:38.175 "superblock": false, 00:12:38.175 "num_base_bdevs": 2, 00:12:38.175 "num_base_bdevs_discovered": 2, 00:12:38.175 "num_base_bdevs_operational": 2, 00:12:38.175 "process": { 00:12:38.175 "type": "rebuild", 00:12:38.175 "target": "spare", 00:12:38.175 "progress": { 00:12:38.175 "blocks": 12288, 
00:12:38.175 "percent": 18 00:12:38.175 } 00:12:38.175 }, 00:12:38.175 "base_bdevs_list": [ 00:12:38.175 { 00:12:38.175 "name": "spare", 00:12:38.175 "uuid": "b5ebd6d2-0108-5fcc-b125-4d2094cffc4e", 00:12:38.175 "is_configured": true, 00:12:38.175 "data_offset": 0, 00:12:38.175 "data_size": 65536 00:12:38.175 }, 00:12:38.175 { 00:12:38.175 "name": "BaseBdev2", 00:12:38.175 "uuid": "29fe5c31-f86a-53c6-b7fc-6e0c4fc5c24d", 00:12:38.175 "is_configured": true, 00:12:38.175 "data_offset": 0, 00:12:38.175 "data_size": 65536 00:12:38.175 } 00:12:38.175 ] 00:12:38.175 }' 00:12:38.175 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.434 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.434 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.434 [2024-12-06 09:50:03.488415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:38.434 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.434 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:38.434 09:50:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.434 09:50:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.434 [2024-12-06 09:50:03.523489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:38.434 [2024-12-06 09:50:03.642003] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:38.434 [2024-12-06 09:50:03.649634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.434 [2024-12-06 09:50:03.649671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:38.434 [2024-12-06 09:50:03.649687] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:38.434 [2024-12-06 09:50:03.692486] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.694 "name": "raid_bdev1", 00:12:38.694 "uuid": "059ccd74-3deb-4e9e-8ee6-fee93823052c", 00:12:38.694 "strip_size_kb": 0, 00:12:38.694 "state": "online", 00:12:38.694 "raid_level": "raid1", 00:12:38.694 "superblock": false, 00:12:38.694 "num_base_bdevs": 2, 00:12:38.694 "num_base_bdevs_discovered": 1, 00:12:38.694 "num_base_bdevs_operational": 1, 00:12:38.694 "base_bdevs_list": [ 00:12:38.694 { 00:12:38.694 "name": null, 00:12:38.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.694 "is_configured": false, 00:12:38.694 "data_offset": 0, 00:12:38.694 "data_size": 65536 00:12:38.694 }, 00:12:38.694 { 00:12:38.694 "name": "BaseBdev2", 00:12:38.694 "uuid": "29fe5c31-f86a-53c6-b7fc-6e0c4fc5c24d", 00:12:38.694 "is_configured": true, 00:12:38.694 "data_offset": 0, 00:12:38.694 "data_size": 65536 00:12:38.694 } 00:12:38.694 ] 00:12:38.694 }' 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.694 09:50:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.955 164.00 IOPS, 492.00 MiB/s [2024-12-06T09:50:04.228Z] 09:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.955 09:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.955 09:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.955 09:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.955 09:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.955 09:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.955 09:50:04 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.955 09:50:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.955 09:50:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.955 09:50:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.955 09:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.955 "name": "raid_bdev1", 00:12:38.955 "uuid": "059ccd74-3deb-4e9e-8ee6-fee93823052c", 00:12:38.955 "strip_size_kb": 0, 00:12:38.955 "state": "online", 00:12:38.955 "raid_level": "raid1", 00:12:38.955 "superblock": false, 00:12:38.955 "num_base_bdevs": 2, 00:12:38.955 "num_base_bdevs_discovered": 1, 00:12:38.955 "num_base_bdevs_operational": 1, 00:12:38.955 "base_bdevs_list": [ 00:12:38.955 { 00:12:38.955 "name": null, 00:12:38.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.955 "is_configured": false, 00:12:38.955 "data_offset": 0, 00:12:38.955 "data_size": 65536 00:12:38.955 }, 00:12:38.955 { 00:12:38.955 "name": "BaseBdev2", 00:12:38.955 "uuid": "29fe5c31-f86a-53c6-b7fc-6e0c4fc5c24d", 00:12:38.955 "is_configured": true, 00:12:38.955 "data_offset": 0, 00:12:38.955 "data_size": 65536 00:12:38.955 } 00:12:38.955 ] 00:12:38.955 }' 00:12:38.955 09:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.955 09:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.955 09:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.216 09:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:39.216 09:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:39.216 09:50:04 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.216 09:50:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.216 [2024-12-06 09:50:04.272111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.216 09:50:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.216 09:50:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:39.216 [2024-12-06 09:50:04.327389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:39.216 [2024-12-06 09:50:04.329379] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:39.216 [2024-12-06 09:50:04.448541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:39.216 [2024-12-06 09:50:04.449321] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:39.476 [2024-12-06 09:50:04.674846] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:39.476 [2024-12-06 09:50:04.675331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:40.044 156.00 IOPS, 468.00 MiB/s [2024-12-06T09:50:05.317Z] 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.044 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.044 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.044 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.044 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.303 09:50:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.303 [2024-12-06 09:50:05.357959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:40.303 [2024-12-06 09:50:05.358610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.303 "name": "raid_bdev1", 00:12:40.303 "uuid": "059ccd74-3deb-4e9e-8ee6-fee93823052c", 00:12:40.303 "strip_size_kb": 0, 00:12:40.303 "state": "online", 00:12:40.303 "raid_level": "raid1", 00:12:40.303 "superblock": false, 00:12:40.303 "num_base_bdevs": 2, 00:12:40.303 "num_base_bdevs_discovered": 2, 00:12:40.303 "num_base_bdevs_operational": 2, 00:12:40.303 "process": { 00:12:40.303 "type": "rebuild", 00:12:40.303 "target": "spare", 00:12:40.303 "progress": { 00:12:40.303 "blocks": 12288, 00:12:40.303 "percent": 18 00:12:40.303 } 00:12:40.303 }, 00:12:40.303 "base_bdevs_list": [ 00:12:40.303 { 00:12:40.303 "name": "spare", 00:12:40.303 "uuid": "b5ebd6d2-0108-5fcc-b125-4d2094cffc4e", 00:12:40.303 "is_configured": true, 00:12:40.303 "data_offset": 0, 00:12:40.303 "data_size": 65536 00:12:40.303 }, 00:12:40.303 { 00:12:40.303 "name": "BaseBdev2", 00:12:40.303 "uuid": "29fe5c31-f86a-53c6-b7fc-6e0c4fc5c24d", 00:12:40.303 "is_configured": true, 00:12:40.303 "data_offset": 0, 00:12:40.303 "data_size": 65536 00:12:40.303 } 
00:12:40.303 ] 00:12:40.303 }' 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=399 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.303 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.303 "name": "raid_bdev1", 00:12:40.303 "uuid": "059ccd74-3deb-4e9e-8ee6-fee93823052c", 00:12:40.303 "strip_size_kb": 0, 00:12:40.303 "state": "online", 00:12:40.303 "raid_level": "raid1", 00:12:40.303 "superblock": false, 00:12:40.303 "num_base_bdevs": 2, 00:12:40.303 "num_base_bdevs_discovered": 2, 00:12:40.303 "num_base_bdevs_operational": 2, 00:12:40.303 "process": { 00:12:40.303 "type": "rebuild", 00:12:40.303 "target": "spare", 00:12:40.303 "progress": { 00:12:40.303 "blocks": 14336, 00:12:40.303 "percent": 21 00:12:40.303 } 00:12:40.303 }, 00:12:40.303 "base_bdevs_list": [ 00:12:40.303 { 00:12:40.303 "name": "spare", 00:12:40.303 "uuid": "b5ebd6d2-0108-5fcc-b125-4d2094cffc4e", 00:12:40.303 "is_configured": true, 00:12:40.303 "data_offset": 0, 00:12:40.303 "data_size": 65536 00:12:40.303 }, 00:12:40.303 { 00:12:40.303 "name": "BaseBdev2", 00:12:40.303 "uuid": "29fe5c31-f86a-53c6-b7fc-6e0c4fc5c24d", 00:12:40.304 "is_configured": true, 00:12:40.304 "data_offset": 0, 00:12:40.304 "data_size": 65536 00:12:40.304 } 00:12:40.304 ] 00:12:40.304 }' 00:12:40.304 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.304 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.304 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.304 [2024-12-06 09:50:05.566522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:40.304 09:50:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.304 [2024-12-06 09:50:05.566941] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:40.304 09:50:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:40.869 [2024-12-06 09:50:05.886226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:41.127 138.25 IOPS, 414.75 MiB/s [2024-12-06T09:50:06.400Z] [2024-12-06 09:50:06.299042] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:41.385 [2024-12-06 09:50:06.512076] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:41.385 [2024-12-06 09:50:06.512440] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.385 "name": "raid_bdev1", 00:12:41.385 "uuid": "059ccd74-3deb-4e9e-8ee6-fee93823052c", 00:12:41.385 "strip_size_kb": 0, 00:12:41.385 "state": "online", 00:12:41.385 "raid_level": "raid1", 00:12:41.385 "superblock": false, 00:12:41.385 "num_base_bdevs": 2, 00:12:41.385 "num_base_bdevs_discovered": 2, 00:12:41.385 "num_base_bdevs_operational": 2, 00:12:41.385 "process": { 00:12:41.385 "type": "rebuild", 00:12:41.385 "target": "spare", 00:12:41.385 "progress": { 00:12:41.385 "blocks": 28672, 00:12:41.385 "percent": 43 00:12:41.385 } 00:12:41.385 }, 00:12:41.385 "base_bdevs_list": [ 00:12:41.385 { 00:12:41.385 "name": "spare", 00:12:41.385 "uuid": "b5ebd6d2-0108-5fcc-b125-4d2094cffc4e", 00:12:41.385 "is_configured": true, 00:12:41.385 "data_offset": 0, 00:12:41.385 "data_size": 65536 00:12:41.385 }, 00:12:41.385 { 00:12:41.385 "name": "BaseBdev2", 00:12:41.385 "uuid": "29fe5c31-f86a-53c6-b7fc-6e0c4fc5c24d", 00:12:41.385 "is_configured": true, 00:12:41.385 "data_offset": 0, 00:12:41.385 "data_size": 65536 00:12:41.385 } 00:12:41.385 ] 00:12:41.385 }' 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:41.385 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.644 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:41.644 09:50:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:41.644 [2024-12-06 09:50:06.827255] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:41.903 118.00 IOPS, 354.00 MiB/s [2024-12-06T09:50:07.176Z] [2024-12-06 09:50:07.052856] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:41.903 [2024-12-06 09:50:07.053313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:42.471 [2024-12-06 09:50:07.634609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:42.471 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.471 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.471 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.471 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.471 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.472 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.472 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.472 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.472 09:50:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.472 09:50:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.472 09:50:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.732 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:42.732 "name": "raid_bdev1", 00:12:42.732 "uuid": "059ccd74-3deb-4e9e-8ee6-fee93823052c", 00:12:42.732 "strip_size_kb": 0, 00:12:42.732 "state": "online", 00:12:42.732 "raid_level": "raid1", 00:12:42.732 "superblock": false, 00:12:42.732 "num_base_bdevs": 2, 00:12:42.732 "num_base_bdevs_discovered": 2, 00:12:42.732 "num_base_bdevs_operational": 2, 00:12:42.732 "process": { 00:12:42.732 "type": "rebuild", 00:12:42.732 "target": "spare", 00:12:42.732 "progress": { 00:12:42.732 "blocks": 45056, 00:12:42.732 "percent": 68 00:12:42.732 } 00:12:42.732 }, 00:12:42.732 "base_bdevs_list": [ 00:12:42.732 { 00:12:42.732 "name": "spare", 00:12:42.732 "uuid": "b5ebd6d2-0108-5fcc-b125-4d2094cffc4e", 00:12:42.732 "is_configured": true, 00:12:42.732 "data_offset": 0, 00:12:42.732 "data_size": 65536 00:12:42.732 }, 00:12:42.732 { 00:12:42.732 "name": "BaseBdev2", 00:12:42.732 "uuid": "29fe5c31-f86a-53c6-b7fc-6e0c4fc5c24d", 00:12:42.732 "is_configured": true, 00:12:42.732 "data_offset": 0, 00:12:42.732 "data_size": 65536 00:12:42.732 } 00:12:42.732 ] 00:12:42.732 }' 00:12:42.732 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.732 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.732 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.732 [2024-12-06 09:50:07.848610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:42.732 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.732 09:50:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:42.992 105.00 IOPS, 315.00 MiB/s [2024-12-06T09:50:08.265Z] [2024-12-06 09:50:08.172511] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 
00:12:43.250 [2024-12-06 09:50:08.503854] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.821 "name": "raid_bdev1", 00:12:43.821 "uuid": "059ccd74-3deb-4e9e-8ee6-fee93823052c", 00:12:43.821 "strip_size_kb": 0, 00:12:43.821 "state": "online", 00:12:43.821 "raid_level": "raid1", 00:12:43.821 "superblock": false, 00:12:43.821 "num_base_bdevs": 2, 00:12:43.821 "num_base_bdevs_discovered": 2, 00:12:43.821 "num_base_bdevs_operational": 2, 00:12:43.821 "process": { 00:12:43.821 "type": "rebuild", 00:12:43.821 "target": "spare", 00:12:43.821 "progress": { 00:12:43.821 "blocks": 63488, 
00:12:43.821 "percent": 96 00:12:43.821 } 00:12:43.821 }, 00:12:43.821 "base_bdevs_list": [ 00:12:43.821 { 00:12:43.821 "name": "spare", 00:12:43.821 "uuid": "b5ebd6d2-0108-5fcc-b125-4d2094cffc4e", 00:12:43.821 "is_configured": true, 00:12:43.821 "data_offset": 0, 00:12:43.821 "data_size": 65536 00:12:43.821 }, 00:12:43.821 { 00:12:43.821 "name": "BaseBdev2", 00:12:43.821 "uuid": "29fe5c31-f86a-53c6-b7fc-6e0c4fc5c24d", 00:12:43.821 "is_configured": true, 00:12:43.821 "data_offset": 0, 00:12:43.821 "data_size": 65536 00:12:43.821 } 00:12:43.821 ] 00:12:43.821 }' 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.821 [2024-12-06 09:50:08.939670] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.821 09:50:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.821 94.43 IOPS, 283.29 MiB/s [2024-12-06T09:50:09.094Z] 09:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.821 09:50:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:43.821 [2024-12-06 09:50:09.045287] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:43.821 [2024-12-06 09:50:09.047674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.762 87.00 IOPS, 261.00 MiB/s [2024-12-06T09:50:10.035Z] 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:44.762 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.762 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.762 09:50:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.762 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.762 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.762 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.762 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.762 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.762 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.022 "name": "raid_bdev1", 00:12:45.022 "uuid": "059ccd74-3deb-4e9e-8ee6-fee93823052c", 00:12:45.022 "strip_size_kb": 0, 00:12:45.022 "state": "online", 00:12:45.022 "raid_level": "raid1", 00:12:45.022 "superblock": false, 00:12:45.022 "num_base_bdevs": 2, 00:12:45.022 "num_base_bdevs_discovered": 2, 00:12:45.022 "num_base_bdevs_operational": 2, 00:12:45.022 "base_bdevs_list": [ 00:12:45.022 { 00:12:45.022 "name": "spare", 00:12:45.022 "uuid": "b5ebd6d2-0108-5fcc-b125-4d2094cffc4e", 00:12:45.022 "is_configured": true, 00:12:45.022 "data_offset": 0, 00:12:45.022 "data_size": 65536 00:12:45.022 }, 00:12:45.022 { 00:12:45.022 "name": "BaseBdev2", 00:12:45.022 "uuid": "29fe5c31-f86a-53c6-b7fc-6e0c4fc5c24d", 00:12:45.022 "is_configured": true, 00:12:45.022 "data_offset": 0, 00:12:45.022 "data_size": 65536 00:12:45.022 } 00:12:45.022 ] 00:12:45.022 }' 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.022 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.022 "name": "raid_bdev1", 00:12:45.022 "uuid": "059ccd74-3deb-4e9e-8ee6-fee93823052c", 00:12:45.022 "strip_size_kb": 0, 00:12:45.022 "state": "online", 00:12:45.022 "raid_level": "raid1", 00:12:45.022 "superblock": false, 00:12:45.022 "num_base_bdevs": 2, 00:12:45.023 "num_base_bdevs_discovered": 2, 00:12:45.023 "num_base_bdevs_operational": 2, 00:12:45.023 "base_bdevs_list": [ 00:12:45.023 { 00:12:45.023 
"name": "spare", 00:12:45.023 "uuid": "b5ebd6d2-0108-5fcc-b125-4d2094cffc4e", 00:12:45.023 "is_configured": true, 00:12:45.023 "data_offset": 0, 00:12:45.023 "data_size": 65536 00:12:45.023 }, 00:12:45.023 { 00:12:45.023 "name": "BaseBdev2", 00:12:45.023 "uuid": "29fe5c31-f86a-53c6-b7fc-6e0c4fc5c24d", 00:12:45.023 "is_configured": true, 00:12:45.023 "data_offset": 0, 00:12:45.023 "data_size": 65536 00:12:45.023 } 00:12:45.023 ] 00:12:45.023 }' 00:12:45.023 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.023 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.023 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.283 "name": "raid_bdev1", 00:12:45.283 "uuid": "059ccd74-3deb-4e9e-8ee6-fee93823052c", 00:12:45.283 "strip_size_kb": 0, 00:12:45.283 "state": "online", 00:12:45.283 "raid_level": "raid1", 00:12:45.283 "superblock": false, 00:12:45.283 "num_base_bdevs": 2, 00:12:45.283 "num_base_bdevs_discovered": 2, 00:12:45.283 "num_base_bdevs_operational": 2, 00:12:45.283 "base_bdevs_list": [ 00:12:45.283 { 00:12:45.283 "name": "spare", 00:12:45.283 "uuid": "b5ebd6d2-0108-5fcc-b125-4d2094cffc4e", 00:12:45.283 "is_configured": true, 00:12:45.283 "data_offset": 0, 00:12:45.283 "data_size": 65536 00:12:45.283 }, 00:12:45.283 { 00:12:45.283 "name": "BaseBdev2", 00:12:45.283 "uuid": "29fe5c31-f86a-53c6-b7fc-6e0c4fc5c24d", 00:12:45.283 "is_configured": true, 00:12:45.283 "data_offset": 0, 00:12:45.283 "data_size": 65536 00:12:45.283 } 00:12:45.283 ] 00:12:45.283 }' 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.283 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.543 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:45.543 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.543 09:50:10 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.543 [2024-12-06 09:50:10.731564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:45.543 [2024-12-06 09:50:10.731596] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.543 00:12:45.543 Latency(us) 00:12:45.543 [2024-12-06T09:50:10.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.543 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:45.543 raid_bdev1 : 8.77 83.13 249.38 0.00 0.00 16644.24 316.59 109436.53 00:12:45.543 [2024-12-06T09:50:10.816Z] =================================================================================================================== 00:12:45.543 [2024-12-06T09:50:10.816Z] Total : 83.13 249.38 0.00 0.00 16644.24 316.59 109436.53 00:12:45.543 [2024-12-06 09:50:10.788229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.543 [2024-12-06 09:50:10.788350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.544 [2024-12-06 09:50:10.788452] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.544 [2024-12-06 09:50:10.788532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:45.544 { 00:12:45.544 "results": [ 00:12:45.544 { 00:12:45.544 "job": "raid_bdev1", 00:12:45.544 "core_mask": "0x1", 00:12:45.544 "workload": "randrw", 00:12:45.544 "percentage": 50, 00:12:45.544 "status": "finished", 00:12:45.544 "queue_depth": 2, 00:12:45.544 "io_size": 3145728, 00:12:45.544 "runtime": 8.769893, 00:12:45.544 "iops": 83.12530152876438, 00:12:45.544 "mibps": 249.37590458629313, 00:12:45.544 "io_failed": 0, 00:12:45.544 "io_timeout": 0, 00:12:45.544 "avg_latency_us": 16644.238220688745, 00:12:45.544 "min_latency_us": 316.5903930131004, 00:12:45.544 
"max_latency_us": 109436.5344978166 00:12:45.544 } 00:12:45.544 ], 00:12:45.544 "core_count": 1 00:12:45.544 } 00:12:45.544 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.544 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.544 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.544 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.544 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:45.544 09:50:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.803 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:45.803 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:45.803 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:45.803 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:45.803 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.803 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:45.803 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:45.803 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:45.803 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:45.803 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:45.803 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:45.803 09:50:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:45.803 09:50:10 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:45.803 /dev/nbd0 00:12:46.062 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:46.062 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.063 1+0 records in 00:12:46.063 1+0 records out 00:12:46.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530719 s, 7.7 MB/s 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.063 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:46.063 /dev/nbd1 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.323 1+0 records in 00:12:46.323 1+0 records out 00:12:46.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543498 s, 7.5 MB/s 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 
/dev/nbd0 /dev/nbd1 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.323 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 
00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.582 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76355 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76355 ']' 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76355 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.841 09:50:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 76355 00:12:46.841 killing process with pid 76355 00:12:46.841 Received shutdown signal, test time was about 10.031485 seconds 00:12:46.841 00:12:46.841 Latency(us) 00:12:46.841 [2024-12-06T09:50:12.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.841 [2024-12-06T09:50:12.114Z] =================================================================================================================== 00:12:46.841 [2024-12-06T09:50:12.114Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:46.841 09:50:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.841 09:50:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.841 09:50:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76355' 00:12:46.841 09:50:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76355 00:12:46.841 09:50:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76355 00:12:46.841 [2024-12-06 09:50:12.026606] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:47.100 [2024-12-06 09:50:12.261996] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:48.481 00:12:48.481 real 0m13.152s 00:12:48.481 user 0m16.372s 00:12:48.481 sys 0m1.433s 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.481 ************************************ 00:12:48.481 END TEST raid_rebuild_test_io 00:12:48.481 ************************************ 00:12:48.481 09:50:13 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:48.481 
09:50:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:48.481 09:50:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.481 09:50:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:48.481 ************************************ 00:12:48.481 START TEST raid_rebuild_test_sb_io 00:12:48.481 ************************************ 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 
'BaseBdev2') 00:12:48.481 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76750 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76750 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76750 ']' 00:12:48.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.482 09:50:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.482 [2024-12-06 09:50:13.596179] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:12:48.482 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:48.482 Zero copy mechanism will not be used. 00:12:48.482 [2024-12-06 09:50:13.596362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76750 ] 00:12:48.741 [2024-12-06 09:50:13.753610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.741 [2024-12-06 09:50:13.866185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.001 [2024-12-06 09:50:14.063871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.001 [2024-12-06 09:50:14.063909] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:49.260 09:50:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.260 BaseBdev1_malloc 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.260 [2024-12-06 09:50:14.474165] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:49.260 [2024-12-06 09:50:14.474229] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.260 [2024-12-06 09:50:14.474255] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:49.260 [2024-12-06 09:50:14.474268] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.260 [2024-12-06 09:50:14.476751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.260 [2024-12-06 09:50:14.476788] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:49.260 BaseBdev1 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:49.260 09:50:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.260 BaseBdev2_malloc 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.260 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.260 [2024-12-06 09:50:14.530971] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:49.260 [2024-12-06 09:50:14.531050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.261 [2024-12-06 09:50:14.531073] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:49.261 [2024-12-06 09:50:14.531083] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.520 [2024-12-06 09:50:14.533082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.520 [2024-12-06 09:50:14.533120] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:49.520 BaseBdev2 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.520 spare_malloc 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.520 spare_delay 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.520 [2024-12-06 09:50:14.609730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:49.520 [2024-12-06 09:50:14.609788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.520 [2024-12-06 09:50:14.609807] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:49.520 [2024-12-06 09:50:14.609818] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.520 [2024-12-06 09:50:14.611845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.520 [2024-12-06 09:50:14.611882] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:49.520 spare 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.520 [2024-12-06 09:50:14.621778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.520 [2024-12-06 09:50:14.623597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.520 [2024-12-06 09:50:14.623772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:49.520 [2024-12-06 09:50:14.623786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:49.520 [2024-12-06 09:50:14.624071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:49.520 [2024-12-06 09:50:14.624282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:49.520 [2024-12-06 09:50:14.624302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:49.520 [2024-12-06 09:50:14.624471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.520 
09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.520 "name": "raid_bdev1", 00:12:49.520 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:49.520 "strip_size_kb": 0, 00:12:49.520 "state": "online", 00:12:49.520 "raid_level": "raid1", 00:12:49.520 "superblock": true, 00:12:49.520 "num_base_bdevs": 2, 00:12:49.520 "num_base_bdevs_discovered": 2, 00:12:49.520 "num_base_bdevs_operational": 2, 00:12:49.520 "base_bdevs_list": [ 00:12:49.520 { 00:12:49.520 "name": "BaseBdev1", 00:12:49.520 "uuid": "91febac5-a2d4-5278-afc1-d40a0b88d3b4", 00:12:49.520 "is_configured": true, 00:12:49.520 "data_offset": 2048, 00:12:49.520 "data_size": 63488 00:12:49.520 }, 00:12:49.520 { 00:12:49.520 "name": "BaseBdev2", 00:12:49.520 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:49.520 "is_configured": true, 00:12:49.520 "data_offset": 2048, 00:12:49.520 "data_size": 63488 00:12:49.520 } 00:12:49.520 ] 00:12:49.520 }' 00:12:49.520 09:50:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.520 09:50:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.779 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:49.779 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:49.779 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.779 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.780 [2024-12-06 09:50:15.045335] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.038 [2024-12-06 09:50:15.144872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.038 "name": "raid_bdev1", 00:12:50.038 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:50.038 "strip_size_kb": 0, 00:12:50.038 "state": "online", 00:12:50.038 "raid_level": "raid1", 00:12:50.038 "superblock": true, 00:12:50.038 "num_base_bdevs": 2, 00:12:50.038 "num_base_bdevs_discovered": 1, 00:12:50.038 "num_base_bdevs_operational": 1, 00:12:50.038 "base_bdevs_list": [ 00:12:50.038 { 00:12:50.038 "name": null, 00:12:50.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.038 "is_configured": false, 00:12:50.038 "data_offset": 0, 00:12:50.038 "data_size": 63488 00:12:50.038 }, 00:12:50.038 { 00:12:50.038 "name": "BaseBdev2", 00:12:50.038 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:50.038 "is_configured": true, 00:12:50.038 "data_offset": 2048, 00:12:50.038 "data_size": 63488 00:12:50.038 } 00:12:50.038 ] 00:12:50.038 }' 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.038 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.038 [2024-12-06 09:50:15.244513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:50.038 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:50.038 Zero copy mechanism will not be used. 00:12:50.038 Running I/O for 60 seconds... 
00:12:50.618 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:50.618 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.618 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.618 [2024-12-06 09:50:15.576197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:50.618 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.618 09:50:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:50.618 [2024-12-06 09:50:15.625418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:50.618 [2024-12-06 09:50:15.627304] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:50.618 [2024-12-06 09:50:15.740995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:50.618 [2024-12-06 09:50:15.741577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:50.876 [2024-12-06 09:50:15.955818] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:50.876 [2024-12-06 09:50:15.956204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:51.134 [2024-12-06 09:50:16.193459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:51.134 217.00 IOPS, 651.00 MiB/s [2024-12-06T09:50:16.407Z] [2024-12-06 09:50:16.319376] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:51.393 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:51.393 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.393 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:51.393 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:51.393 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.393 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.393 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.393 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.393 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.393 [2024-12-06 09:50:16.650624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:51.393 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.653 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.653 "name": "raid_bdev1", 00:12:51.653 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:51.653 "strip_size_kb": 0, 00:12:51.653 "state": "online", 00:12:51.653 "raid_level": "raid1", 00:12:51.653 "superblock": true, 00:12:51.653 "num_base_bdevs": 2, 00:12:51.653 "num_base_bdevs_discovered": 2, 00:12:51.653 "num_base_bdevs_operational": 2, 00:12:51.653 "process": { 00:12:51.653 "type": "rebuild", 00:12:51.653 "target": "spare", 00:12:51.653 "progress": { 00:12:51.653 "blocks": 12288, 00:12:51.653 "percent": 19 00:12:51.653 } 00:12:51.653 }, 00:12:51.653 "base_bdevs_list": [ 00:12:51.653 { 00:12:51.653 "name": "spare", 
00:12:51.653 "uuid": "a2edf81f-46fe-586d-aa73-5860d7ee56ee", 00:12:51.653 "is_configured": true, 00:12:51.653 "data_offset": 2048, 00:12:51.653 "data_size": 63488 00:12:51.653 }, 00:12:51.653 { 00:12:51.653 "name": "BaseBdev2", 00:12:51.653 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:51.653 "is_configured": true, 00:12:51.653 "data_offset": 2048, 00:12:51.653 "data_size": 63488 00:12:51.653 } 00:12:51.653 ] 00:12:51.653 }' 00:12:51.653 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.653 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:51.653 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.653 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:51.653 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:51.653 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.653 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.653 [2024-12-06 09:50:16.777466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:51.653 [2024-12-06 09:50:16.884519] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:51.653 [2024-12-06 09:50:16.887334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.654 [2024-12-06 09:50:16.887385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:51.654 [2024-12-06 09:50:16.887400] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:51.654 [2024-12-06 09:50:16.923691] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 
0x60d000006080 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.914 "name": "raid_bdev1", 00:12:51.914 "uuid": 
"83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:51.914 "strip_size_kb": 0, 00:12:51.914 "state": "online", 00:12:51.914 "raid_level": "raid1", 00:12:51.914 "superblock": true, 00:12:51.914 "num_base_bdevs": 2, 00:12:51.914 "num_base_bdevs_discovered": 1, 00:12:51.914 "num_base_bdevs_operational": 1, 00:12:51.914 "base_bdevs_list": [ 00:12:51.914 { 00:12:51.914 "name": null, 00:12:51.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.914 "is_configured": false, 00:12:51.914 "data_offset": 0, 00:12:51.914 "data_size": 63488 00:12:51.914 }, 00:12:51.914 { 00:12:51.914 "name": "BaseBdev2", 00:12:51.914 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:51.914 "is_configured": true, 00:12:51.914 "data_offset": 2048, 00:12:51.914 "data_size": 63488 00:12:51.914 } 00:12:51.914 ] 00:12:51.914 }' 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.914 09:50:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.174 184.50 IOPS, 553.50 MiB/s [2024-12-06T09:50:17.447Z] 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.174 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.174 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.174 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.174 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.174 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.174 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.174 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:52.174 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.174 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.174 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.174 "name": "raid_bdev1", 00:12:52.174 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:52.174 "strip_size_kb": 0, 00:12:52.174 "state": "online", 00:12:52.174 "raid_level": "raid1", 00:12:52.174 "superblock": true, 00:12:52.174 "num_base_bdevs": 2, 00:12:52.174 "num_base_bdevs_discovered": 1, 00:12:52.174 "num_base_bdevs_operational": 1, 00:12:52.174 "base_bdevs_list": [ 00:12:52.174 { 00:12:52.175 "name": null, 00:12:52.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.175 "is_configured": false, 00:12:52.175 "data_offset": 0, 00:12:52.175 "data_size": 63488 00:12:52.175 }, 00:12:52.175 { 00:12:52.175 "name": "BaseBdev2", 00:12:52.175 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:52.175 "is_configured": true, 00:12:52.175 "data_offset": 2048, 00:12:52.175 "data_size": 63488 00:12:52.175 } 00:12:52.175 ] 00:12:52.175 }' 00:12:52.175 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.435 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.435 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.435 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.435 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:52.435 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.435 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.435 
[2024-12-06 09:50:17.501602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.435 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.435 09:50:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:52.435 [2024-12-06 09:50:17.539884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:52.435 [2024-12-06 09:50:17.541696] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:52.435 [2024-12-06 09:50:17.644411] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:52.435 [2024-12-06 09:50:17.645005] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:52.695 [2024-12-06 09:50:17.870985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:52.695 [2024-12-06 09:50:17.871341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:53.264 175.00 IOPS, 525.00 MiB/s [2024-12-06T09:50:18.537Z] [2024-12-06 09:50:18.372930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:53.265 [2024-12-06 09:50:18.373297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.524 09:50:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.524 "name": "raid_bdev1", 00:12:53.524 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:53.524 "strip_size_kb": 0, 00:12:53.524 "state": "online", 00:12:53.524 "raid_level": "raid1", 00:12:53.524 "superblock": true, 00:12:53.524 "num_base_bdevs": 2, 00:12:53.524 "num_base_bdevs_discovered": 2, 00:12:53.524 "num_base_bdevs_operational": 2, 00:12:53.524 "process": { 00:12:53.524 "type": "rebuild", 00:12:53.524 "target": "spare", 00:12:53.524 "progress": { 00:12:53.524 "blocks": 10240, 00:12:53.524 "percent": 16 00:12:53.524 } 00:12:53.524 }, 00:12:53.524 "base_bdevs_list": [ 00:12:53.524 { 00:12:53.524 "name": "spare", 00:12:53.524 "uuid": "a2edf81f-46fe-586d-aa73-5860d7ee56ee", 00:12:53.524 "is_configured": true, 00:12:53.524 "data_offset": 2048, 00:12:53.524 "data_size": 63488 00:12:53.524 }, 00:12:53.524 { 00:12:53.524 "name": "BaseBdev2", 00:12:53.524 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:53.524 "is_configured": true, 00:12:53.524 "data_offset": 2048, 00:12:53.524 "data_size": 63488 00:12:53.524 } 00:12:53.524 ] 00:12:53.524 }' 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:53.524 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=412 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.524 09:50:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.524 [2024-12-06 09:50:18.704906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:53.524 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.525 "name": "raid_bdev1", 00:12:53.525 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:53.525 "strip_size_kb": 0, 00:12:53.525 "state": "online", 00:12:53.525 "raid_level": "raid1", 00:12:53.525 "superblock": true, 00:12:53.525 "num_base_bdevs": 2, 00:12:53.525 "num_base_bdevs_discovered": 2, 00:12:53.525 "num_base_bdevs_operational": 2, 00:12:53.525 "process": { 00:12:53.525 "type": "rebuild", 00:12:53.525 "target": "spare", 00:12:53.525 "progress": { 00:12:53.525 "blocks": 12288, 00:12:53.525 "percent": 19 00:12:53.525 } 00:12:53.525 }, 00:12:53.525 "base_bdevs_list": [ 00:12:53.525 { 00:12:53.525 "name": "spare", 00:12:53.525 "uuid": "a2edf81f-46fe-586d-aa73-5860d7ee56ee", 00:12:53.525 "is_configured": true, 00:12:53.525 "data_offset": 2048, 00:12:53.525 "data_size": 63488 00:12:53.525 }, 00:12:53.525 { 00:12:53.525 "name": "BaseBdev2", 00:12:53.525 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:53.525 "is_configured": true, 00:12:53.525 "data_offset": 2048, 00:12:53.525 "data_size": 63488 00:12:53.525 } 00:12:53.525 ] 00:12:53.525 }' 00:12:53.525 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.525 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:53.525 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.785 [2024-12-06 09:50:18.815293] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:53.785 [2024-12-06 09:50:18.815625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:53.785 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.785 09:50:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:54.045 [2024-12-06 09:50:19.145243] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:54.045 159.50 IOPS, 478.50 MiB/s [2024-12-06T09:50:19.318Z] [2024-12-06 09:50:19.267737] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:54.616 [2024-12-06 09:50:19.590941] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:54.616 [2024-12-06 09:50:19.800521] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:54.616 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.616 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.616 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.616 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.616 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.616 09:50:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.616 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.616 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.616 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.616 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.616 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.616 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.616 "name": "raid_bdev1", 00:12:54.616 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:54.616 "strip_size_kb": 0, 00:12:54.616 "state": "online", 00:12:54.616 "raid_level": "raid1", 00:12:54.616 "superblock": true, 00:12:54.616 "num_base_bdevs": 2, 00:12:54.616 "num_base_bdevs_discovered": 2, 00:12:54.616 "num_base_bdevs_operational": 2, 00:12:54.616 "process": { 00:12:54.616 "type": "rebuild", 00:12:54.616 "target": "spare", 00:12:54.616 "progress": { 00:12:54.616 "blocks": 28672, 00:12:54.616 "percent": 45 00:12:54.616 } 00:12:54.616 }, 00:12:54.616 "base_bdevs_list": [ 00:12:54.616 { 00:12:54.616 "name": "spare", 00:12:54.616 "uuid": "a2edf81f-46fe-586d-aa73-5860d7ee56ee", 00:12:54.616 "is_configured": true, 00:12:54.616 "data_offset": 2048, 00:12:54.616 "data_size": 63488 00:12:54.616 }, 00:12:54.616 { 00:12:54.616 "name": "BaseBdev2", 00:12:54.616 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:54.616 "is_configured": true, 00:12:54.616 "data_offset": 2048, 00:12:54.616 "data_size": 63488 00:12:54.616 } 00:12:54.616 ] 00:12:54.616 }' 00:12:54.616 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.875 09:50:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.875 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.875 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.875 09:50:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.135 137.80 IOPS, 413.40 MiB/s [2024-12-06T09:50:20.408Z] [2024-12-06 09:50:20.280204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:55.705 09:50:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.705 09:50:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.965 09:50:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.965 09:50:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.965 09:50:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.965 09:50:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.965 09:50:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.965 09:50:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.965 09:50:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.965 09:50:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.965 09:50:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.965 09:50:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:55.965 "name": "raid_bdev1", 00:12:55.965 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:55.965 "strip_size_kb": 0, 00:12:55.965 "state": "online", 00:12:55.965 "raid_level": "raid1", 00:12:55.965 "superblock": true, 00:12:55.965 "num_base_bdevs": 2, 00:12:55.965 "num_base_bdevs_discovered": 2, 00:12:55.965 "num_base_bdevs_operational": 2, 00:12:55.965 "process": { 00:12:55.965 "type": "rebuild", 00:12:55.965 "target": "spare", 00:12:55.965 "progress": { 00:12:55.965 "blocks": 47104, 00:12:55.965 "percent": 74 00:12:55.965 } 00:12:55.965 }, 00:12:55.965 "base_bdevs_list": [ 00:12:55.965 { 00:12:55.965 "name": "spare", 00:12:55.965 "uuid": "a2edf81f-46fe-586d-aa73-5860d7ee56ee", 00:12:55.965 "is_configured": true, 00:12:55.965 "data_offset": 2048, 00:12:55.965 "data_size": 63488 00:12:55.965 }, 00:12:55.965 { 00:12:55.965 "name": "BaseBdev2", 00:12:55.965 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:55.965 "is_configured": true, 00:12:55.965 "data_offset": 2048, 00:12:55.965 "data_size": 63488 00:12:55.965 } 00:12:55.965 ] 00:12:55.965 }' 00:12:55.965 09:50:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.965 09:50:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.965 09:50:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.965 09:50:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.965 09:50:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.965 [2024-12-06 09:50:21.166871] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:56.225 123.00 IOPS, 369.00 MiB/s [2024-12-06T09:50:21.498Z] [2024-12-06 09:50:21.274027] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 
49152 offset_end: 55296 00:12:56.484 [2024-12-06 09:50:21.602030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:56.744 [2024-12-06 09:50:21.926663] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:57.003 [2024-12-06 09:50:22.026466] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:57.003 [2024-12-06 09:50:22.028805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.003 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.003 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.003 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.003 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.003 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.003 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.004 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.004 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.004 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.004 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.004 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.004 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.004 "name": "raid_bdev1", 00:12:57.004 "uuid": 
"83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:57.004 "strip_size_kb": 0, 00:12:57.004 "state": "online", 00:12:57.004 "raid_level": "raid1", 00:12:57.004 "superblock": true, 00:12:57.004 "num_base_bdevs": 2, 00:12:57.004 "num_base_bdevs_discovered": 2, 00:12:57.004 "num_base_bdevs_operational": 2, 00:12:57.004 "base_bdevs_list": [ 00:12:57.004 { 00:12:57.004 "name": "spare", 00:12:57.004 "uuid": "a2edf81f-46fe-586d-aa73-5860d7ee56ee", 00:12:57.004 "is_configured": true, 00:12:57.004 "data_offset": 2048, 00:12:57.004 "data_size": 63488 00:12:57.004 }, 00:12:57.004 { 00:12:57.004 "name": "BaseBdev2", 00:12:57.004 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:57.004 "is_configured": true, 00:12:57.004 "data_offset": 2048, 00:12:57.004 "data_size": 63488 00:12:57.004 } 00:12:57.004 ] 00:12:57.004 }' 00:12:57.004 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.004 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:57.004 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.004 109.43 IOPS, 328.29 MiB/s [2024-12-06T09:50:22.277Z] 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:57.004 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.264 "name": "raid_bdev1", 00:12:57.264 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:57.264 "strip_size_kb": 0, 00:12:57.264 "state": "online", 00:12:57.264 "raid_level": "raid1", 00:12:57.264 "superblock": true, 00:12:57.264 "num_base_bdevs": 2, 00:12:57.264 "num_base_bdevs_discovered": 2, 00:12:57.264 "num_base_bdevs_operational": 2, 00:12:57.264 "base_bdevs_list": [ 00:12:57.264 { 00:12:57.264 "name": "spare", 00:12:57.264 "uuid": "a2edf81f-46fe-586d-aa73-5860d7ee56ee", 00:12:57.264 "is_configured": true, 00:12:57.264 "data_offset": 2048, 00:12:57.264 "data_size": 63488 00:12:57.264 }, 00:12:57.264 { 00:12:57.264 "name": "BaseBdev2", 00:12:57.264 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:57.264 "is_configured": true, 00:12:57.264 "data_offset": 2048, 00:12:57.264 "data_size": 63488 00:12:57.264 } 00:12:57.264 ] 00:12:57.264 }' 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.264 "name": "raid_bdev1", 00:12:57.264 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:57.264 "strip_size_kb": 0, 00:12:57.264 "state": "online", 00:12:57.264 "raid_level": "raid1", 
00:12:57.264 "superblock": true, 00:12:57.264 "num_base_bdevs": 2, 00:12:57.264 "num_base_bdevs_discovered": 2, 00:12:57.264 "num_base_bdevs_operational": 2, 00:12:57.264 "base_bdevs_list": [ 00:12:57.264 { 00:12:57.264 "name": "spare", 00:12:57.264 "uuid": "a2edf81f-46fe-586d-aa73-5860d7ee56ee", 00:12:57.264 "is_configured": true, 00:12:57.264 "data_offset": 2048, 00:12:57.264 "data_size": 63488 00:12:57.264 }, 00:12:57.264 { 00:12:57.264 "name": "BaseBdev2", 00:12:57.264 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:57.264 "is_configured": true, 00:12:57.264 "data_offset": 2048, 00:12:57.264 "data_size": 63488 00:12:57.264 } 00:12:57.264 ] 00:12:57.264 }' 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.264 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:57.834 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.834 09:50:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.834 [2024-12-06 09:50:22.897713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:57.834 [2024-12-06 09:50:22.897752] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:57.834 00:12:57.834 Latency(us) 00:12:57.834 [2024-12-06T09:50:23.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.834 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:57.834 raid_bdev1 : 7.76 102.68 308.04 0.00 0.00 13391.42 314.80 120883.87 00:12:57.834 [2024-12-06T09:50:23.107Z] =================================================================================================================== 00:12:57.834 [2024-12-06T09:50:23.107Z] Total 
: 102.68 308.04 0.00 0.00 13391.42 314.80 120883.87 00:12:57.834 [2024-12-06 09:50:23.015269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.834 [2024-12-06 09:50:23.015355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.835 [2024-12-06 09:50:23.015434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.835 [2024-12-06 09:50:23.015445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:57.835 { 00:12:57.835 "results": [ 00:12:57.835 { 00:12:57.835 "job": "raid_bdev1", 00:12:57.835 "core_mask": "0x1", 00:12:57.835 "workload": "randrw", 00:12:57.835 "percentage": 50, 00:12:57.835 "status": "finished", 00:12:57.835 "queue_depth": 2, 00:12:57.835 "io_size": 3145728, 00:12:57.835 "runtime": 7.761881, 00:12:57.835 "iops": 102.68129593844584, 00:12:57.835 "mibps": 308.04388781533754, 00:12:57.835 "io_failed": 0, 00:12:57.835 "io_timeout": 0, 00:12:57.835 "avg_latency_us": 13391.418588264945, 00:12:57.835 "min_latency_us": 314.80174672489085, 00:12:57.835 "max_latency_us": 120883.87074235808 00:12:57.835 } 00:12:57.835 ], 00:12:57.835 "core_count": 1 00:12:57.835 } 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 
0 == 0 ]] 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:57.835 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:58.095 /dev/nbd0 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.095 1+0 records in 00:12:58.095 1+0 records out 00:12:58.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394042 s, 10.4 MB/s 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 
00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.095 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:58.355 /dev/nbd1 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- 
# (( i = 1 )) 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.355 1+0 records in 00:12:58.355 1+0 records out 00:12:58.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402312 s, 10.2 MB/s 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:58.355 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:58.615 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:58.615 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.615 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:58.615 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.615 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:58.615 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.615 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.875 09:50:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:58.875 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.135 [2024-12-06 09:50:24.172482] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:59.135 [2024-12-06 09:50:24.172533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.135 [2024-12-06 09:50:24.172556] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:59.135 
[2024-12-06 09:50:24.172566] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.135 [2024-12-06 09:50:24.174668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.135 [2024-12-06 09:50:24.174701] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:59.135 [2024-12-06 09:50:24.174762] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:59.135 [2024-12-06 09:50:24.174805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.135 [2024-12-06 09:50:24.174986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.135 spare 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.135 [2024-12-06 09:50:24.274887] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:59.135 [2024-12-06 09:50:24.274921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:59.135 [2024-12-06 09:50:24.275200] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:59.135 [2024-12-06 09:50:24.275402] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:59.135 [2024-12-06 09:50:24.275421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:59.135 [2024-12-06 09:50:24.275594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.135 09:50:24 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.136 "name": "raid_bdev1", 00:12:59.136 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:59.136 "strip_size_kb": 0, 00:12:59.136 
"state": "online", 00:12:59.136 "raid_level": "raid1", 00:12:59.136 "superblock": true, 00:12:59.136 "num_base_bdevs": 2, 00:12:59.136 "num_base_bdevs_discovered": 2, 00:12:59.136 "num_base_bdevs_operational": 2, 00:12:59.136 "base_bdevs_list": [ 00:12:59.136 { 00:12:59.136 "name": "spare", 00:12:59.136 "uuid": "a2edf81f-46fe-586d-aa73-5860d7ee56ee", 00:12:59.136 "is_configured": true, 00:12:59.136 "data_offset": 2048, 00:12:59.136 "data_size": 63488 00:12:59.136 }, 00:12:59.136 { 00:12:59.136 "name": "BaseBdev2", 00:12:59.136 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:59.136 "is_configured": true, 00:12:59.136 "data_offset": 2048, 00:12:59.136 "data_size": 63488 00:12:59.136 } 00:12:59.136 ] 00:12:59.136 }' 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.136 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.704 09:50:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.704 "name": "raid_bdev1", 00:12:59.704 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:59.704 "strip_size_kb": 0, 00:12:59.704 "state": "online", 00:12:59.704 "raid_level": "raid1", 00:12:59.704 "superblock": true, 00:12:59.704 "num_base_bdevs": 2, 00:12:59.704 "num_base_bdevs_discovered": 2, 00:12:59.704 "num_base_bdevs_operational": 2, 00:12:59.704 "base_bdevs_list": [ 00:12:59.704 { 00:12:59.704 "name": "spare", 00:12:59.704 "uuid": "a2edf81f-46fe-586d-aa73-5860d7ee56ee", 00:12:59.704 "is_configured": true, 00:12:59.704 "data_offset": 2048, 00:12:59.704 "data_size": 63488 00:12:59.704 }, 00:12:59.704 { 00:12:59.704 "name": "BaseBdev2", 00:12:59.704 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:59.704 "is_configured": true, 00:12:59.704 "data_offset": 2048, 00:12:59.704 "data_size": 63488 00:12:59.704 } 00:12:59.704 ] 00:12:59.704 }' 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.704 09:50:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.704 [2024-12-06 09:50:24.931383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.704 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.966 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.966 "name": "raid_bdev1", 00:12:59.966 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:12:59.966 "strip_size_kb": 0, 00:12:59.966 "state": "online", 00:12:59.966 "raid_level": "raid1", 00:12:59.966 "superblock": true, 00:12:59.966 "num_base_bdevs": 2, 00:12:59.966 "num_base_bdevs_discovered": 1, 00:12:59.966 "num_base_bdevs_operational": 1, 00:12:59.966 "base_bdevs_list": [ 00:12:59.966 { 00:12:59.966 "name": null, 00:12:59.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.966 "is_configured": false, 00:12:59.966 "data_offset": 0, 00:12:59.966 "data_size": 63488 00:12:59.966 }, 00:12:59.966 { 00:12:59.966 "name": "BaseBdev2", 00:12:59.966 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:12:59.966 "is_configured": true, 00:12:59.966 "data_offset": 2048, 00:12:59.966 "data_size": 63488 00:12:59.966 } 00:12:59.966 ] 00:12:59.966 }' 00:12:59.966 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.966 09:50:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.224 09:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:00.224 09:50:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.224 09:50:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.224 [2024-12-06 
09:50:25.318778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.224 [2024-12-06 09:50:25.318980] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:00.224 [2024-12-06 09:50:25.319007] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:00.224 [2024-12-06 09:50:25.319038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.224 [2024-12-06 09:50:25.335031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:00.224 09:50:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.224 09:50:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:00.224 [2024-12-06 09:50:25.336870] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:01.164 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.164 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.164 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.164 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.164 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.164 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.164 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.164 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.164 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:01.164 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.164 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.164 "name": "raid_bdev1", 00:13:01.164 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:13:01.164 "strip_size_kb": 0, 00:13:01.164 "state": "online", 00:13:01.164 "raid_level": "raid1", 00:13:01.164 "superblock": true, 00:13:01.164 "num_base_bdevs": 2, 00:13:01.164 "num_base_bdevs_discovered": 2, 00:13:01.164 "num_base_bdevs_operational": 2, 00:13:01.164 "process": { 00:13:01.164 "type": "rebuild", 00:13:01.164 "target": "spare", 00:13:01.164 "progress": { 00:13:01.164 "blocks": 20480, 00:13:01.164 "percent": 32 00:13:01.164 } 00:13:01.164 }, 00:13:01.164 "base_bdevs_list": [ 00:13:01.164 { 00:13:01.164 "name": "spare", 00:13:01.164 "uuid": "a2edf81f-46fe-586d-aa73-5860d7ee56ee", 00:13:01.164 "is_configured": true, 00:13:01.164 "data_offset": 2048, 00:13:01.164 "data_size": 63488 00:13:01.164 }, 00:13:01.164 { 00:13:01.164 "name": "BaseBdev2", 00:13:01.164 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:13:01.164 "is_configured": true, 00:13:01.164 "data_offset": 2048, 00:13:01.164 "data_size": 63488 00:13:01.164 } 00:13:01.164 ] 00:13:01.164 }' 00:13:01.164 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.423 [2024-12-06 09:50:26.500543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.423 [2024-12-06 09:50:26.542036] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:01.423 [2024-12-06 09:50:26.542106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.423 [2024-12-06 09:50:26.542122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:01.423 [2024-12-06 09:50:26.542131] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.423 "name": "raid_bdev1", 00:13:01.423 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:13:01.423 "strip_size_kb": 0, 00:13:01.423 "state": "online", 00:13:01.423 "raid_level": "raid1", 00:13:01.423 "superblock": true, 00:13:01.423 "num_base_bdevs": 2, 00:13:01.423 "num_base_bdevs_discovered": 1, 00:13:01.423 "num_base_bdevs_operational": 1, 00:13:01.423 "base_bdevs_list": [ 00:13:01.423 { 00:13:01.423 "name": null, 00:13:01.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.423 "is_configured": false, 00:13:01.423 "data_offset": 0, 00:13:01.423 "data_size": 63488 00:13:01.423 }, 00:13:01.423 { 00:13:01.423 "name": "BaseBdev2", 00:13:01.423 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:13:01.423 "is_configured": true, 00:13:01.423 "data_offset": 2048, 00:13:01.423 "data_size": 63488 00:13:01.423 } 00:13:01.423 ] 00:13:01.423 }' 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.423 09:50:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.992 09:50:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:01.992 09:50:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:01.992 09:50:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.992 [2024-12-06 09:50:27.032651] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:01.992 [2024-12-06 09:50:27.032728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.992 [2024-12-06 09:50:27.032752] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:01.992 [2024-12-06 09:50:27.032762] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.992 [2024-12-06 09:50:27.033279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.992 [2024-12-06 09:50:27.033314] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:01.992 [2024-12-06 09:50:27.033409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:01.992 [2024-12-06 09:50:27.033425] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:01.992 [2024-12-06 09:50:27.033436] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:01.992 [2024-12-06 09:50:27.033467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.992 [2024-12-06 09:50:27.049623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:01.992 spare 00:13:01.992 09:50:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.992 09:50:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:01.992 [2024-12-06 09:50:27.051527] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.932 "name": "raid_bdev1", 00:13:02.932 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:13:02.932 "strip_size_kb": 0, 00:13:02.932 
"state": "online", 00:13:02.932 "raid_level": "raid1", 00:13:02.932 "superblock": true, 00:13:02.932 "num_base_bdevs": 2, 00:13:02.932 "num_base_bdevs_discovered": 2, 00:13:02.932 "num_base_bdevs_operational": 2, 00:13:02.932 "process": { 00:13:02.932 "type": "rebuild", 00:13:02.932 "target": "spare", 00:13:02.932 "progress": { 00:13:02.932 "blocks": 20480, 00:13:02.932 "percent": 32 00:13:02.932 } 00:13:02.932 }, 00:13:02.932 "base_bdevs_list": [ 00:13:02.932 { 00:13:02.932 "name": "spare", 00:13:02.932 "uuid": "a2edf81f-46fe-586d-aa73-5860d7ee56ee", 00:13:02.932 "is_configured": true, 00:13:02.932 "data_offset": 2048, 00:13:02.932 "data_size": 63488 00:13:02.932 }, 00:13:02.932 { 00:13:02.932 "name": "BaseBdev2", 00:13:02.932 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:13:02.932 "is_configured": true, 00:13:02.932 "data_offset": 2048, 00:13:02.932 "data_size": 63488 00:13:02.932 } 00:13:02.932 ] 00:13:02.932 }' 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.932 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.192 [2024-12-06 09:50:28.212138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.192 [2024-12-06 09:50:28.256558] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:03.192 [2024-12-06 09:50:28.256612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.192 [2024-12-06 09:50:28.256629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:03.192 [2024-12-06 09:50:28.256636] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.192 09:50:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.192 "name": "raid_bdev1", 00:13:03.192 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:13:03.192 "strip_size_kb": 0, 00:13:03.192 "state": "online", 00:13:03.192 "raid_level": "raid1", 00:13:03.192 "superblock": true, 00:13:03.192 "num_base_bdevs": 2, 00:13:03.192 "num_base_bdevs_discovered": 1, 00:13:03.192 "num_base_bdevs_operational": 1, 00:13:03.192 "base_bdevs_list": [ 00:13:03.192 { 00:13:03.192 "name": null, 00:13:03.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.192 "is_configured": false, 00:13:03.192 "data_offset": 0, 00:13:03.192 "data_size": 63488 00:13:03.192 }, 00:13:03.192 { 00:13:03.192 "name": "BaseBdev2", 00:13:03.192 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:13:03.192 "is_configured": true, 00:13:03.192 "data_offset": 2048, 00:13:03.192 "data_size": 63488 00:13:03.192 } 00:13:03.192 ] 00:13:03.192 }' 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.192 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.761 "name": "raid_bdev1", 00:13:03.761 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:13:03.761 "strip_size_kb": 0, 00:13:03.761 "state": "online", 00:13:03.761 "raid_level": "raid1", 00:13:03.761 "superblock": true, 00:13:03.761 "num_base_bdevs": 2, 00:13:03.761 "num_base_bdevs_discovered": 1, 00:13:03.761 "num_base_bdevs_operational": 1, 00:13:03.761 "base_bdevs_list": [ 00:13:03.761 { 00:13:03.761 "name": null, 00:13:03.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.761 "is_configured": false, 00:13:03.761 "data_offset": 0, 00:13:03.761 "data_size": 63488 00:13:03.761 }, 00:13:03.761 { 00:13:03.761 "name": "BaseBdev2", 00:13:03.761 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:13:03.761 "is_configured": true, 00:13:03.761 "data_offset": 2048, 00:13:03.761 "data_size": 63488 00:13:03.761 } 00:13:03.761 ] 00:13:03.761 }' 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.761 [2024-12-06 09:50:28.900171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:03.761 [2024-12-06 09:50:28.900227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.761 [2024-12-06 09:50:28.900258] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:03.761 [2024-12-06 09:50:28.900268] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.761 [2024-12-06 09:50:28.900694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.761 [2024-12-06 09:50:28.900711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:03.761 [2024-12-06 09:50:28.900787] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:03.761 [2024-12-06 09:50:28.900801] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:03.761 [2024-12-06 09:50:28.900811] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:03.761 [2024-12-06 09:50:28.900821] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:03.761 BaseBdev1 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.761 09:50:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.699 "name": "raid_bdev1", 00:13:04.699 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:13:04.699 "strip_size_kb": 0, 00:13:04.699 "state": "online", 00:13:04.699 "raid_level": "raid1", 00:13:04.699 "superblock": true, 00:13:04.699 "num_base_bdevs": 2, 00:13:04.699 "num_base_bdevs_discovered": 1, 00:13:04.699 "num_base_bdevs_operational": 1, 00:13:04.699 "base_bdevs_list": [ 00:13:04.699 { 00:13:04.699 "name": null, 00:13:04.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.699 "is_configured": false, 00:13:04.699 "data_offset": 0, 00:13:04.699 "data_size": 63488 00:13:04.699 }, 00:13:04.699 { 00:13:04.699 "name": "BaseBdev2", 00:13:04.699 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:13:04.699 "is_configured": true, 00:13:04.699 "data_offset": 2048, 00:13:04.699 "data_size": 63488 00:13:04.699 } 00:13:04.699 ] 00:13:04.699 }' 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.699 09:50:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.266 "name": "raid_bdev1", 00:13:05.266 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:13:05.266 "strip_size_kb": 0, 00:13:05.266 "state": "online", 00:13:05.266 "raid_level": "raid1", 00:13:05.266 "superblock": true, 00:13:05.266 "num_base_bdevs": 2, 00:13:05.266 "num_base_bdevs_discovered": 1, 00:13:05.266 "num_base_bdevs_operational": 1, 00:13:05.266 "base_bdevs_list": [ 00:13:05.266 { 00:13:05.266 "name": null, 00:13:05.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.266 "is_configured": false, 00:13:05.266 "data_offset": 0, 00:13:05.266 "data_size": 63488 00:13:05.266 }, 00:13:05.266 { 00:13:05.266 "name": "BaseBdev2", 00:13:05.266 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:13:05.266 "is_configured": true, 00:13:05.266 "data_offset": 2048, 00:13:05.266 "data_size": 63488 00:13:05.266 } 00:13:05.266 ] 00:13:05.266 }' 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.266 [2024-12-06 09:50:30.521778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:05.266 [2024-12-06 09:50:30.521953] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:05.266 [2024-12-06 09:50:30.521974] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:05.266 request: 00:13:05.266 { 00:13:05.266 "base_bdev": "BaseBdev1", 00:13:05.266 "raid_bdev": "raid_bdev1", 00:13:05.266 "method": "bdev_raid_add_base_bdev", 00:13:05.266 "req_id": 1 00:13:05.266 } 00:13:05.266 Got JSON-RPC error response 00:13:05.266 response: 00:13:05.266 { 00:13:05.266 "code": -22, 00:13:05.266 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:05.266 } 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.266 09:50:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.646 "name": "raid_bdev1", 00:13:06.646 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:13:06.646 "strip_size_kb": 0, 00:13:06.646 "state": "online", 00:13:06.646 "raid_level": "raid1", 00:13:06.646 "superblock": true, 00:13:06.646 "num_base_bdevs": 2, 00:13:06.646 "num_base_bdevs_discovered": 1, 00:13:06.646 "num_base_bdevs_operational": 1, 00:13:06.646 "base_bdevs_list": [ 00:13:06.646 { 00:13:06.646 "name": null, 00:13:06.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.646 "is_configured": false, 00:13:06.646 "data_offset": 0, 00:13:06.646 "data_size": 63488 00:13:06.646 }, 00:13:06.646 { 00:13:06.646 "name": "BaseBdev2", 00:13:06.646 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:13:06.646 "is_configured": true, 00:13:06.646 "data_offset": 2048, 00:13:06.646 "data_size": 63488 00:13:06.646 } 00:13:06.646 ] 00:13:06.646 }' 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.646 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.905 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.905 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.905 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.905 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.905 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.905 09:50:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.905 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.905 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.905 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.905 09:50:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.905 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.905 "name": "raid_bdev1", 00:13:06.905 "uuid": "83684b4c-d4df-4a2c-9788-622c1cd73243", 00:13:06.905 "strip_size_kb": 0, 00:13:06.905 "state": "online", 00:13:06.905 "raid_level": "raid1", 00:13:06.905 "superblock": true, 00:13:06.905 "num_base_bdevs": 2, 00:13:06.905 "num_base_bdevs_discovered": 1, 00:13:06.905 "num_base_bdevs_operational": 1, 00:13:06.905 "base_bdevs_list": [ 00:13:06.905 { 00:13:06.905 "name": null, 00:13:06.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.905 "is_configured": false, 00:13:06.905 "data_offset": 0, 00:13:06.905 "data_size": 63488 00:13:06.905 }, 00:13:06.905 { 00:13:06.905 "name": "BaseBdev2", 00:13:06.905 "uuid": "d0eb44f9-eb2f-5bef-8b5a-40fd299e71a7", 00:13:06.905 "is_configured": true, 00:13:06.905 "data_offset": 2048, 00:13:06.905 "data_size": 63488 00:13:06.905 } 00:13:06.905 ] 00:13:06.905 }' 00:13:06.905 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.905 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.905 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.905 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.905 09:50:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76750 00:13:06.905 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76750 ']' 00:13:06.906 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76750 00:13:06.906 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:06.906 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.906 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76750 00:13:06.906 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.906 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.906 killing process with pid 76750 00:13:06.906 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76750' 00:13:06.906 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76750 00:13:06.906 Received shutdown signal, test time was about 16.963368 seconds 00:13:06.906 00:13:06.906 Latency(us) 00:13:06.906 [2024-12-06T09:50:32.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.906 [2024-12-06T09:50:32.179Z] =================================================================================================================== 00:13:06.906 [2024-12-06T09:50:32.179Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:07.164 [2024-12-06 09:50:32.177251] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.164 [2024-12-06 09:50:32.177404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.164 [2024-12-06 09:50:32.177473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:13:07.165 [2024-12-06 09:50:32.177490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:07.165 09:50:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76750 00:13:07.165 [2024-12-06 09:50:32.399788] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:08.545 00:13:08.545 real 0m20.075s 00:13:08.545 user 0m26.292s 00:13:08.545 sys 0m2.166s 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.545 ************************************ 00:13:08.545 END TEST raid_rebuild_test_sb_io 00:13:08.545 ************************************ 00:13:08.545 09:50:33 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:08.545 09:50:33 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:08.545 09:50:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:08.545 09:50:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.545 09:50:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.545 ************************************ 00:13:08.545 START TEST raid_rebuild_test 00:13:08.545 ************************************ 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:08.545 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:08.546 09:50:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77439 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77439 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77439 ']' 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.546 09:50:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.546 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:08.546 Zero copy mechanism will not be used. 
00:13:08.546 [2024-12-06 09:50:33.738478] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:13:08.546 [2024-12-06 09:50:33.738593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77439 ] 00:13:08.805 [2024-12-06 09:50:33.911148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.805 [2024-12-06 09:50:34.021633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.065 [2024-12-06 09:50:34.213984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.065 [2024-12-06 09:50:34.214045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.325 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.325 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:09.325 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.325 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:09.325 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.325 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.585 BaseBdev1_malloc 00:13:09.585 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.585 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:09.585 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.585 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.585 
[2024-12-06 09:50:34.627279] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:09.585 [2024-12-06 09:50:34.627336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.585 [2024-12-06 09:50:34.627358] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:09.585 [2024-12-06 09:50:34.627368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.585 [2024-12-06 09:50:34.629456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.585 [2024-12-06 09:50:34.629492] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:09.585 BaseBdev1 00:13:09.585 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.585 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.585 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:09.585 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.585 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.585 BaseBdev2_malloc 00:13:09.585 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.585 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:09.585 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.586 [2024-12-06 09:50:34.682359] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:09.586 [2024-12-06 09:50:34.682413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:09.586 [2024-12-06 09:50:34.682437] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:09.586 [2024-12-06 09:50:34.682449] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.586 [2024-12-06 09:50:34.684530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.586 [2024-12-06 09:50:34.684566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:09.586 BaseBdev2 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.586 BaseBdev3_malloc 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.586 [2024-12-06 09:50:34.748554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:09.586 [2024-12-06 09:50:34.748603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.586 [2024-12-06 09:50:34.748623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:09.586 [2024-12-06 09:50:34.748634] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.586 [2024-12-06 09:50:34.750634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.586 [2024-12-06 09:50:34.750671] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:09.586 BaseBdev3 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.586 BaseBdev4_malloc 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.586 [2024-12-06 09:50:34.803985] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:09.586 [2024-12-06 09:50:34.804037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.586 [2024-12-06 09:50:34.804056] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:09.586 [2024-12-06 09:50:34.804066] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.586 [2024-12-06 09:50:34.806121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.586 [2024-12-06 09:50:34.806169] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:09.586 BaseBdev4 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.586 spare_malloc 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.586 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.847 spare_delay 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.847 [2024-12-06 09:50:34.869993] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:09.847 [2024-12-06 09:50:34.870040] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.847 [2024-12-06 09:50:34.870056] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:09.847 [2024-12-06 09:50:34.870065] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.847 [2024-12-06 
09:50:34.872073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.847 [2024-12-06 09:50:34.872111] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:09.847 spare 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.847 [2024-12-06 09:50:34.882016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.847 [2024-12-06 09:50:34.883778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.847 [2024-12-06 09:50:34.883867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.847 [2024-12-06 09:50:34.883921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:09.847 [2024-12-06 09:50:34.883998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:09.847 [2024-12-06 09:50:34.884011] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:09.847 [2024-12-06 09:50:34.884282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:09.847 [2024-12-06 09:50:34.884478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:09.847 [2024-12-06 09:50:34.884503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:09.847 [2024-12-06 09:50:34.884651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.847 "name": "raid_bdev1", 00:13:09.847 "uuid": "a5b50789-8350-40fc-b5b6-4fa62e383f17", 00:13:09.847 "strip_size_kb": 0, 00:13:09.847 "state": "online", 00:13:09.847 "raid_level": 
"raid1", 00:13:09.847 "superblock": false, 00:13:09.847 "num_base_bdevs": 4, 00:13:09.847 "num_base_bdevs_discovered": 4, 00:13:09.847 "num_base_bdevs_operational": 4, 00:13:09.847 "base_bdevs_list": [ 00:13:09.847 { 00:13:09.847 "name": "BaseBdev1", 00:13:09.847 "uuid": "392aceea-5aae-5fcc-951e-da5f2cc71a5f", 00:13:09.847 "is_configured": true, 00:13:09.847 "data_offset": 0, 00:13:09.847 "data_size": 65536 00:13:09.847 }, 00:13:09.847 { 00:13:09.847 "name": "BaseBdev2", 00:13:09.847 "uuid": "981c3ca4-0e5e-528e-8512-215a6433973b", 00:13:09.847 "is_configured": true, 00:13:09.847 "data_offset": 0, 00:13:09.847 "data_size": 65536 00:13:09.847 }, 00:13:09.847 { 00:13:09.847 "name": "BaseBdev3", 00:13:09.847 "uuid": "9351c46c-f429-55cd-8d55-afad8e4d0929", 00:13:09.847 "is_configured": true, 00:13:09.847 "data_offset": 0, 00:13:09.847 "data_size": 65536 00:13:09.847 }, 00:13:09.847 { 00:13:09.847 "name": "BaseBdev4", 00:13:09.847 "uuid": "0faa1b06-99c9-5eac-9cc3-5ed37b46953b", 00:13:09.847 "is_configured": true, 00:13:09.847 "data_offset": 0, 00:13:09.847 "data_size": 65536 00:13:09.847 } 00:13:09.847 ] 00:13:09.847 }' 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.847 09:50:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.105 09:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:10.106 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.106 09:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:10.106 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.106 [2024-12-06 09:50:35.321627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.106 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.106 09:50:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:10.106 09:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.106 09:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:10.106 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.106 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:10.365 [2024-12-06 09:50:35.576904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:10.365 /dev/nbd0 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.365 1+0 records in 00:13:10.365 1+0 records out 00:13:10.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198988 s, 20.6 MB/s 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:10.365 09:50:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:15.641 65536+0 records in 00:13:15.641 65536+0 records out 00:13:15.641 33554432 bytes (34 MB, 32 MiB) copied, 5.2716 s, 6.4 MB/s 00:13:15.641 09:50:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:15.641 09:50:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.641 09:50:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:15.641 09:50:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:15.641 09:50:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:15.641 09:50:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.641 09:50:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:15.902 [2024-12-06 09:50:41.114667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:15.902 
09:50:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.902 [2024-12-06 09:50:41.151071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.902 09:50:41 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.902 09:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.163 09:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.163 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.163 "name": "raid_bdev1", 00:13:16.163 "uuid": "a5b50789-8350-40fc-b5b6-4fa62e383f17", 00:13:16.163 "strip_size_kb": 0, 00:13:16.163 "state": "online", 00:13:16.163 "raid_level": "raid1", 00:13:16.163 "superblock": false, 00:13:16.163 "num_base_bdevs": 4, 00:13:16.163 "num_base_bdevs_discovered": 3, 00:13:16.163 "num_base_bdevs_operational": 3, 00:13:16.163 "base_bdevs_list": [ 00:13:16.163 { 00:13:16.163 "name": null, 00:13:16.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.163 "is_configured": false, 00:13:16.164 "data_offset": 0, 00:13:16.164 "data_size": 65536 00:13:16.164 }, 00:13:16.164 { 00:13:16.164 "name": "BaseBdev2", 00:13:16.164 "uuid": "981c3ca4-0e5e-528e-8512-215a6433973b", 00:13:16.164 "is_configured": true, 00:13:16.164 "data_offset": 0, 00:13:16.164 "data_size": 65536 00:13:16.164 }, 00:13:16.164 { 00:13:16.164 "name": "BaseBdev3", 00:13:16.164 "uuid": "9351c46c-f429-55cd-8d55-afad8e4d0929", 00:13:16.164 "is_configured": true, 00:13:16.164 "data_offset": 0, 00:13:16.164 "data_size": 65536 00:13:16.164 }, 00:13:16.164 { 00:13:16.164 "name": "BaseBdev4", 00:13:16.164 "uuid": "0faa1b06-99c9-5eac-9cc3-5ed37b46953b", 00:13:16.164 
"is_configured": true, 00:13:16.164 "data_offset": 0, 00:13:16.164 "data_size": 65536 00:13:16.164 } 00:13:16.164 ] 00:13:16.164 }' 00:13:16.164 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.164 09:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.424 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:16.424 09:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.424 09:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.424 [2024-12-06 09:50:41.602296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.424 [2024-12-06 09:50:41.616845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:16.424 09:50:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.424 09:50:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:16.424 [2024-12-06 09:50:41.618692] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:17.365 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.366 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.366 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.366 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.366 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.366 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.366 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:17.366 09:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.366 09:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.626 "name": "raid_bdev1", 00:13:17.626 "uuid": "a5b50789-8350-40fc-b5b6-4fa62e383f17", 00:13:17.626 "strip_size_kb": 0, 00:13:17.626 "state": "online", 00:13:17.626 "raid_level": "raid1", 00:13:17.626 "superblock": false, 00:13:17.626 "num_base_bdevs": 4, 00:13:17.626 "num_base_bdevs_discovered": 4, 00:13:17.626 "num_base_bdevs_operational": 4, 00:13:17.626 "process": { 00:13:17.626 "type": "rebuild", 00:13:17.626 "target": "spare", 00:13:17.626 "progress": { 00:13:17.626 "blocks": 20480, 00:13:17.626 "percent": 31 00:13:17.626 } 00:13:17.626 }, 00:13:17.626 "base_bdevs_list": [ 00:13:17.626 { 00:13:17.626 "name": "spare", 00:13:17.626 "uuid": "f2b8f4be-8296-5e82-a927-c20ba930375f", 00:13:17.626 "is_configured": true, 00:13:17.626 "data_offset": 0, 00:13:17.626 "data_size": 65536 00:13:17.626 }, 00:13:17.626 { 00:13:17.626 "name": "BaseBdev2", 00:13:17.626 "uuid": "981c3ca4-0e5e-528e-8512-215a6433973b", 00:13:17.626 "is_configured": true, 00:13:17.626 "data_offset": 0, 00:13:17.626 "data_size": 65536 00:13:17.626 }, 00:13:17.626 { 00:13:17.626 "name": "BaseBdev3", 00:13:17.626 "uuid": "9351c46c-f429-55cd-8d55-afad8e4d0929", 00:13:17.626 "is_configured": true, 00:13:17.626 "data_offset": 0, 00:13:17.626 "data_size": 65536 00:13:17.626 }, 00:13:17.626 { 00:13:17.626 "name": "BaseBdev4", 00:13:17.626 "uuid": "0faa1b06-99c9-5eac-9cc3-5ed37b46953b", 00:13:17.626 "is_configured": true, 00:13:17.626 "data_offset": 0, 00:13:17.626 "data_size": 65536 00:13:17.626 } 00:13:17.626 ] 00:13:17.626 }' 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.626 [2024-12-06 09:50:42.774131] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.626 [2024-12-06 09:50:42.823718] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:17.626 [2024-12-06 09:50:42.823777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.626 [2024-12-06 09:50:42.823793] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.626 [2024-12-06 09:50:42.823802] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.626 09:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.886 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.886 "name": "raid_bdev1", 00:13:17.886 "uuid": "a5b50789-8350-40fc-b5b6-4fa62e383f17", 00:13:17.886 "strip_size_kb": 0, 00:13:17.886 "state": "online", 00:13:17.886 "raid_level": "raid1", 00:13:17.886 "superblock": false, 00:13:17.886 "num_base_bdevs": 4, 00:13:17.886 "num_base_bdevs_discovered": 3, 00:13:17.886 "num_base_bdevs_operational": 3, 00:13:17.886 "base_bdevs_list": [ 00:13:17.886 { 00:13:17.886 "name": null, 00:13:17.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.886 "is_configured": false, 00:13:17.886 "data_offset": 0, 00:13:17.886 "data_size": 65536 00:13:17.886 }, 00:13:17.886 { 00:13:17.886 "name": "BaseBdev2", 00:13:17.886 "uuid": "981c3ca4-0e5e-528e-8512-215a6433973b", 00:13:17.886 "is_configured": true, 00:13:17.886 "data_offset": 0, 00:13:17.886 "data_size": 65536 00:13:17.886 }, 00:13:17.886 { 
00:13:17.886 "name": "BaseBdev3", 00:13:17.886 "uuid": "9351c46c-f429-55cd-8d55-afad8e4d0929", 00:13:17.886 "is_configured": true, 00:13:17.886 "data_offset": 0, 00:13:17.886 "data_size": 65536 00:13:17.886 }, 00:13:17.886 { 00:13:17.886 "name": "BaseBdev4", 00:13:17.886 "uuid": "0faa1b06-99c9-5eac-9cc3-5ed37b46953b", 00:13:17.886 "is_configured": true, 00:13:17.886 "data_offset": 0, 00:13:17.886 "data_size": 65536 00:13:17.886 } 00:13:17.886 ] 00:13:17.886 }' 00:13:17.886 09:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.886 09:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.146 "name": "raid_bdev1", 00:13:18.146 "uuid": "a5b50789-8350-40fc-b5b6-4fa62e383f17", 00:13:18.146 "strip_size_kb": 0, 00:13:18.146 "state": "online", 
00:13:18.146 "raid_level": "raid1", 00:13:18.146 "superblock": false, 00:13:18.146 "num_base_bdevs": 4, 00:13:18.146 "num_base_bdevs_discovered": 3, 00:13:18.146 "num_base_bdevs_operational": 3, 00:13:18.146 "base_bdevs_list": [ 00:13:18.146 { 00:13:18.146 "name": null, 00:13:18.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.146 "is_configured": false, 00:13:18.146 "data_offset": 0, 00:13:18.146 "data_size": 65536 00:13:18.146 }, 00:13:18.146 { 00:13:18.146 "name": "BaseBdev2", 00:13:18.146 "uuid": "981c3ca4-0e5e-528e-8512-215a6433973b", 00:13:18.146 "is_configured": true, 00:13:18.146 "data_offset": 0, 00:13:18.146 "data_size": 65536 00:13:18.146 }, 00:13:18.146 { 00:13:18.146 "name": "BaseBdev3", 00:13:18.146 "uuid": "9351c46c-f429-55cd-8d55-afad8e4d0929", 00:13:18.146 "is_configured": true, 00:13:18.146 "data_offset": 0, 00:13:18.146 "data_size": 65536 00:13:18.146 }, 00:13:18.146 { 00:13:18.146 "name": "BaseBdev4", 00:13:18.146 "uuid": "0faa1b06-99c9-5eac-9cc3-5ed37b46953b", 00:13:18.146 "is_configured": true, 00:13:18.146 "data_offset": 0, 00:13:18.146 "data_size": 65536 00:13:18.146 } 00:13:18.146 ] 00:13:18.146 }' 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.146 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.407 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.407 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.407 09:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.407 09:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.407 [2024-12-06 09:50:43.436171] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.407 [2024-12-06 09:50:43.450122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:18.407 09:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.407 09:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:18.407 [2024-12-06 09:50:43.451964] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.348 "name": "raid_bdev1", 00:13:19.348 "uuid": "a5b50789-8350-40fc-b5b6-4fa62e383f17", 00:13:19.348 "strip_size_kb": 0, 00:13:19.348 "state": "online", 00:13:19.348 "raid_level": "raid1", 00:13:19.348 "superblock": false, 00:13:19.348 "num_base_bdevs": 4, 00:13:19.348 
"num_base_bdevs_discovered": 4, 00:13:19.348 "num_base_bdevs_operational": 4, 00:13:19.348 "process": { 00:13:19.348 "type": "rebuild", 00:13:19.348 "target": "spare", 00:13:19.348 "progress": { 00:13:19.348 "blocks": 20480, 00:13:19.348 "percent": 31 00:13:19.348 } 00:13:19.348 }, 00:13:19.348 "base_bdevs_list": [ 00:13:19.348 { 00:13:19.348 "name": "spare", 00:13:19.348 "uuid": "f2b8f4be-8296-5e82-a927-c20ba930375f", 00:13:19.348 "is_configured": true, 00:13:19.348 "data_offset": 0, 00:13:19.348 "data_size": 65536 00:13:19.348 }, 00:13:19.348 { 00:13:19.348 "name": "BaseBdev2", 00:13:19.348 "uuid": "981c3ca4-0e5e-528e-8512-215a6433973b", 00:13:19.348 "is_configured": true, 00:13:19.348 "data_offset": 0, 00:13:19.348 "data_size": 65536 00:13:19.348 }, 00:13:19.348 { 00:13:19.348 "name": "BaseBdev3", 00:13:19.348 "uuid": "9351c46c-f429-55cd-8d55-afad8e4d0929", 00:13:19.348 "is_configured": true, 00:13:19.348 "data_offset": 0, 00:13:19.348 "data_size": 65536 00:13:19.348 }, 00:13:19.348 { 00:13:19.348 "name": "BaseBdev4", 00:13:19.348 "uuid": "0faa1b06-99c9-5eac-9cc3-5ed37b46953b", 00:13:19.348 "is_configured": true, 00:13:19.348 "data_offset": 0, 00:13:19.348 "data_size": 65536 00:13:19.348 } 00:13:19.348 ] 00:13:19.348 }' 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.348 09:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.348 [2024-12-06 09:50:44.607392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:19.610 [2024-12-06 09:50:44.656986] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.610 09:50:44 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.610 "name": "raid_bdev1", 00:13:19.610 "uuid": "a5b50789-8350-40fc-b5b6-4fa62e383f17", 00:13:19.610 "strip_size_kb": 0, 00:13:19.610 "state": "online", 00:13:19.610 "raid_level": "raid1", 00:13:19.610 "superblock": false, 00:13:19.610 "num_base_bdevs": 4, 00:13:19.610 "num_base_bdevs_discovered": 3, 00:13:19.610 "num_base_bdevs_operational": 3, 00:13:19.610 "process": { 00:13:19.610 "type": "rebuild", 00:13:19.610 "target": "spare", 00:13:19.610 "progress": { 00:13:19.610 "blocks": 24576, 00:13:19.610 "percent": 37 00:13:19.610 } 00:13:19.610 }, 00:13:19.610 "base_bdevs_list": [ 00:13:19.610 { 00:13:19.610 "name": "spare", 00:13:19.610 "uuid": "f2b8f4be-8296-5e82-a927-c20ba930375f", 00:13:19.610 "is_configured": true, 00:13:19.610 "data_offset": 0, 00:13:19.610 "data_size": 65536 00:13:19.610 }, 00:13:19.610 { 00:13:19.610 "name": null, 00:13:19.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.610 "is_configured": false, 00:13:19.610 "data_offset": 0, 00:13:19.610 "data_size": 65536 00:13:19.610 }, 00:13:19.610 { 00:13:19.610 "name": "BaseBdev3", 00:13:19.610 "uuid": "9351c46c-f429-55cd-8d55-afad8e4d0929", 00:13:19.610 "is_configured": true, 00:13:19.610 "data_offset": 0, 00:13:19.610 "data_size": 65536 00:13:19.610 }, 00:13:19.610 { 00:13:19.610 "name": "BaseBdev4", 00:13:19.610 "uuid": "0faa1b06-99c9-5eac-9cc3-5ed37b46953b", 00:13:19.610 "is_configured": true, 00:13:19.610 "data_offset": 0, 00:13:19.610 "data_size": 65536 00:13:19.610 } 00:13:19.610 ] 00:13:19.610 }' 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=438 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.610 "name": "raid_bdev1", 00:13:19.610 "uuid": "a5b50789-8350-40fc-b5b6-4fa62e383f17", 00:13:19.610 "strip_size_kb": 0, 00:13:19.610 "state": "online", 00:13:19.610 "raid_level": "raid1", 00:13:19.610 "superblock": false, 00:13:19.610 "num_base_bdevs": 4, 00:13:19.610 "num_base_bdevs_discovered": 3, 00:13:19.610 "num_base_bdevs_operational": 3, 00:13:19.610 "process": { 00:13:19.610 "type": "rebuild", 00:13:19.610 "target": "spare", 00:13:19.610 "progress": { 
00:13:19.610 "blocks": 26624, 00:13:19.610 "percent": 40 00:13:19.610 } 00:13:19.610 }, 00:13:19.610 "base_bdevs_list": [ 00:13:19.610 { 00:13:19.610 "name": "spare", 00:13:19.610 "uuid": "f2b8f4be-8296-5e82-a927-c20ba930375f", 00:13:19.610 "is_configured": true, 00:13:19.610 "data_offset": 0, 00:13:19.610 "data_size": 65536 00:13:19.610 }, 00:13:19.610 { 00:13:19.610 "name": null, 00:13:19.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.610 "is_configured": false, 00:13:19.610 "data_offset": 0, 00:13:19.610 "data_size": 65536 00:13:19.610 }, 00:13:19.610 { 00:13:19.610 "name": "BaseBdev3", 00:13:19.610 "uuid": "9351c46c-f429-55cd-8d55-afad8e4d0929", 00:13:19.610 "is_configured": true, 00:13:19.610 "data_offset": 0, 00:13:19.610 "data_size": 65536 00:13:19.610 }, 00:13:19.610 { 00:13:19.610 "name": "BaseBdev4", 00:13:19.610 "uuid": "0faa1b06-99c9-5eac-9cc3-5ed37b46953b", 00:13:19.610 "is_configured": true, 00:13:19.610 "data_offset": 0, 00:13:19.610 "data_size": 65536 00:13:19.610 } 00:13:19.610 ] 00:13:19.610 }' 00:13:19.610 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.871 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.871 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.871 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.871 09:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:20.810 09:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.810 09:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.810 09:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.810 09:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:20.810 09:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.810 09:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.810 09:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.810 09:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.810 09:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.810 09:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.810 09:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.810 09:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.810 "name": "raid_bdev1", 00:13:20.810 "uuid": "a5b50789-8350-40fc-b5b6-4fa62e383f17", 00:13:20.810 "strip_size_kb": 0, 00:13:20.810 "state": "online", 00:13:20.810 "raid_level": "raid1", 00:13:20.810 "superblock": false, 00:13:20.810 "num_base_bdevs": 4, 00:13:20.810 "num_base_bdevs_discovered": 3, 00:13:20.810 "num_base_bdevs_operational": 3, 00:13:20.810 "process": { 00:13:20.810 "type": "rebuild", 00:13:20.810 "target": "spare", 00:13:20.810 "progress": { 00:13:20.810 "blocks": 49152, 00:13:20.810 "percent": 75 00:13:20.810 } 00:13:20.810 }, 00:13:20.810 "base_bdevs_list": [ 00:13:20.810 { 00:13:20.810 "name": "spare", 00:13:20.810 "uuid": "f2b8f4be-8296-5e82-a927-c20ba930375f", 00:13:20.810 "is_configured": true, 00:13:20.810 "data_offset": 0, 00:13:20.810 "data_size": 65536 00:13:20.810 }, 00:13:20.810 { 00:13:20.810 "name": null, 00:13:20.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.810 "is_configured": false, 00:13:20.810 "data_offset": 0, 00:13:20.810 "data_size": 65536 00:13:20.810 }, 00:13:20.810 { 00:13:20.810 "name": "BaseBdev3", 00:13:20.810 "uuid": 
"9351c46c-f429-55cd-8d55-afad8e4d0929", 00:13:20.810 "is_configured": true, 00:13:20.810 "data_offset": 0, 00:13:20.810 "data_size": 65536 00:13:20.810 }, 00:13:20.810 { 00:13:20.810 "name": "BaseBdev4", 00:13:20.810 "uuid": "0faa1b06-99c9-5eac-9cc3-5ed37b46953b", 00:13:20.810 "is_configured": true, 00:13:20.810 "data_offset": 0, 00:13:20.810 "data_size": 65536 00:13:20.810 } 00:13:20.810 ] 00:13:20.810 }' 00:13:20.810 09:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.810 09:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.810 09:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.810 09:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.810 09:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.750 [2024-12-06 09:50:46.665681] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:21.750 [2024-12-06 09:50:46.665766] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:21.750 [2024-12-06 09:50:46.665810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.008 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.008 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.008 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.008 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.008 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.008 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.008 09:50:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.008 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.008 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.008 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.008 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.008 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.008 "name": "raid_bdev1", 00:13:22.008 "uuid": "a5b50789-8350-40fc-b5b6-4fa62e383f17", 00:13:22.008 "strip_size_kb": 0, 00:13:22.008 "state": "online", 00:13:22.008 "raid_level": "raid1", 00:13:22.008 "superblock": false, 00:13:22.008 "num_base_bdevs": 4, 00:13:22.008 "num_base_bdevs_discovered": 3, 00:13:22.008 "num_base_bdevs_operational": 3, 00:13:22.008 "base_bdevs_list": [ 00:13:22.008 { 00:13:22.008 "name": "spare", 00:13:22.008 "uuid": "f2b8f4be-8296-5e82-a927-c20ba930375f", 00:13:22.008 "is_configured": true, 00:13:22.008 "data_offset": 0, 00:13:22.008 "data_size": 65536 00:13:22.008 }, 00:13:22.009 { 00:13:22.009 "name": null, 00:13:22.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.009 "is_configured": false, 00:13:22.009 "data_offset": 0, 00:13:22.009 "data_size": 65536 00:13:22.009 }, 00:13:22.009 { 00:13:22.009 "name": "BaseBdev3", 00:13:22.009 "uuid": "9351c46c-f429-55cd-8d55-afad8e4d0929", 00:13:22.009 "is_configured": true, 00:13:22.009 "data_offset": 0, 00:13:22.009 "data_size": 65536 00:13:22.009 }, 00:13:22.009 { 00:13:22.009 "name": "BaseBdev4", 00:13:22.009 "uuid": "0faa1b06-99c9-5eac-9cc3-5ed37b46953b", 00:13:22.009 "is_configured": true, 00:13:22.009 "data_offset": 0, 00:13:22.009 "data_size": 65536 00:13:22.009 } 00:13:22.009 ] 00:13:22.009 }' 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.009 "name": "raid_bdev1", 00:13:22.009 "uuid": "a5b50789-8350-40fc-b5b6-4fa62e383f17", 00:13:22.009 "strip_size_kb": 0, 00:13:22.009 "state": "online", 00:13:22.009 "raid_level": "raid1", 00:13:22.009 "superblock": false, 00:13:22.009 "num_base_bdevs": 4, 00:13:22.009 "num_base_bdevs_discovered": 3, 00:13:22.009 "num_base_bdevs_operational": 3, 00:13:22.009 
"base_bdevs_list": [ 00:13:22.009 { 00:13:22.009 "name": "spare", 00:13:22.009 "uuid": "f2b8f4be-8296-5e82-a927-c20ba930375f", 00:13:22.009 "is_configured": true, 00:13:22.009 "data_offset": 0, 00:13:22.009 "data_size": 65536 00:13:22.009 }, 00:13:22.009 { 00:13:22.009 "name": null, 00:13:22.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.009 "is_configured": false, 00:13:22.009 "data_offset": 0, 00:13:22.009 "data_size": 65536 00:13:22.009 }, 00:13:22.009 { 00:13:22.009 "name": "BaseBdev3", 00:13:22.009 "uuid": "9351c46c-f429-55cd-8d55-afad8e4d0929", 00:13:22.009 "is_configured": true, 00:13:22.009 "data_offset": 0, 00:13:22.009 "data_size": 65536 00:13:22.009 }, 00:13:22.009 { 00:13:22.009 "name": "BaseBdev4", 00:13:22.009 "uuid": "0faa1b06-99c9-5eac-9cc3-5ed37b46953b", 00:13:22.009 "is_configured": true, 00:13:22.009 "data_offset": 0, 00:13:22.009 "data_size": 65536 00:13:22.009 } 00:13:22.009 ] 00:13:22.009 }' 00:13:22.009 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.269 "name": "raid_bdev1", 00:13:22.269 "uuid": "a5b50789-8350-40fc-b5b6-4fa62e383f17", 00:13:22.269 "strip_size_kb": 0, 00:13:22.269 "state": "online", 00:13:22.269 "raid_level": "raid1", 00:13:22.269 "superblock": false, 00:13:22.269 "num_base_bdevs": 4, 00:13:22.269 "num_base_bdevs_discovered": 3, 00:13:22.269 "num_base_bdevs_operational": 3, 00:13:22.269 "base_bdevs_list": [ 00:13:22.269 { 00:13:22.269 "name": "spare", 00:13:22.269 "uuid": "f2b8f4be-8296-5e82-a927-c20ba930375f", 00:13:22.269 "is_configured": true, 00:13:22.269 "data_offset": 0, 00:13:22.269 "data_size": 65536 00:13:22.269 }, 00:13:22.269 { 00:13:22.269 "name": null, 00:13:22.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.269 "is_configured": false, 00:13:22.269 "data_offset": 0, 00:13:22.269 "data_size": 65536 00:13:22.269 }, 00:13:22.269 { 00:13:22.269 "name": "BaseBdev3", 00:13:22.269 "uuid": 
"9351c46c-f429-55cd-8d55-afad8e4d0929", 00:13:22.269 "is_configured": true, 00:13:22.269 "data_offset": 0, 00:13:22.269 "data_size": 65536 00:13:22.269 }, 00:13:22.269 { 00:13:22.269 "name": "BaseBdev4", 00:13:22.269 "uuid": "0faa1b06-99c9-5eac-9cc3-5ed37b46953b", 00:13:22.269 "is_configured": true, 00:13:22.269 "data_offset": 0, 00:13:22.269 "data_size": 65536 00:13:22.269 } 00:13:22.269 ] 00:13:22.269 }' 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.269 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.529 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:22.529 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.529 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.789 [2024-12-06 09:50:47.805423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.789 [2024-12-06 09:50:47.805456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.789 [2024-12-06 09:50:47.805543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.789 [2024-12-06 09:50:47.805621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.789 [2024-12-06 09:50:47.805631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 
00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.789 09:50:47 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:22.789 /dev/nbd0 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:23.049 09:50:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.049 1+0 records in 00:13:23.049 1+0 records out 00:13:23.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233411 s, 17.5 MB/s 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:23.049 /dev/nbd1 00:13:23.049 
09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:23.049 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.309 1+0 records in 00:13:23.309 1+0 records out 00:13:23.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338294 s, 12.1 MB/s 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.309 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:23.570 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:23.570 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:23.570 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:23.570 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.570 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.570 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:23.570 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:23.570 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.570 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.570 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77439 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77439 ']' 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77439 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.829 09:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77439 00:13:23.829 killing process with pid 77439 00:13:23.829 Received shutdown signal, test time was about 60.000000 seconds 00:13:23.829 00:13:23.829 Latency(us) 00:13:23.829 [2024-12-06T09:50:49.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.829 [2024-12-06T09:50:49.102Z] 
=================================================================================================================== 00:13:23.829 [2024-12-06T09:50:49.102Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:23.829 09:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.829 09:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.829 09:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77439' 00:13:23.829 09:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77439 00:13:23.829 [2024-12-06 09:50:49.004156] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:23.829 09:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77439 00:13:24.400 [2024-12-06 09:50:49.477169] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:25.340 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:25.340 00:13:25.340 real 0m16.941s 00:13:25.340 user 0m19.175s 00:13:25.340 sys 0m2.904s 00:13:25.340 09:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.340 ************************************ 00:13:25.340 END TEST raid_rebuild_test 00:13:25.340 ************************************ 00:13:25.340 09:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.600 09:50:50 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:25.600 09:50:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:25.600 09:50:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.600 09:50:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:25.600 ************************************ 00:13:25.600 START TEST raid_rebuild_test_sb 00:13:25.600 
************************************ 00:13:25.600 09:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:25.600 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77879 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77879 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77879 ']' 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.601 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.601 09:50:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.601 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:25.601 Zero copy mechanism will not be used. 00:13:25.601 [2024-12-06 09:50:50.750798] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:13:25.601 [2024-12-06 09:50:50.750929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77879 ] 00:13:25.862 [2024-12-06 09:50:50.904668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.862 [2024-12-06 09:50:51.016088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.129 [2024-12-06 09:50:51.219142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.129 [2024-12-06 09:50:51.219207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:26.389 09:50:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.389 BaseBdev1_malloc 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.389 [2024-12-06 09:50:51.623537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:26.389 [2024-12-06 09:50:51.623602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.389 [2024-12-06 09:50:51.623624] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:26.389 [2024-12-06 09:50:51.623636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.389 [2024-12-06 09:50:51.625861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.389 [2024-12-06 09:50:51.625947] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:26.389 BaseBdev1 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.389 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.648 
BaseBdev2_malloc 00:13:26.648 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.648 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:26.648 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.649 [2024-12-06 09:50:51.678668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:26.649 [2024-12-06 09:50:51.678729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.649 [2024-12-06 09:50:51.678750] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:26.649 [2024-12-06 09:50:51.678761] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.649 [2024-12-06 09:50:51.680818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.649 [2024-12-06 09:50:51.680901] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:26.649 BaseBdev2 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.649 BaseBdev3_malloc 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.649 [2024-12-06 09:50:51.755391] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:26.649 [2024-12-06 09:50:51.755444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.649 [2024-12-06 09:50:51.755466] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:26.649 [2024-12-06 09:50:51.755477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.649 [2024-12-06 09:50:51.757624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.649 [2024-12-06 09:50:51.757664] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:26.649 BaseBdev3 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.649 BaseBdev4_malloc 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.649 [2024-12-06 09:50:51.809791] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:26.649 [2024-12-06 09:50:51.809848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.649 [2024-12-06 09:50:51.809867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:26.649 [2024-12-06 09:50:51.809878] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.649 [2024-12-06 09:50:51.811936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.649 [2024-12-06 09:50:51.812022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:26.649 BaseBdev4 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.649 spare_malloc 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.649 spare_delay 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.649 [2024-12-06 09:50:51.877949] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:26.649 [2024-12-06 09:50:51.878001] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.649 [2024-12-06 09:50:51.878016] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:26.649 [2024-12-06 09:50:51.878026] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.649 [2024-12-06 09:50:51.880093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.649 [2024-12-06 09:50:51.880133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:26.649 spare 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.649 [2024-12-06 09:50:51.889978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.649 [2024-12-06 09:50:51.891754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.649 [2024-12-06 09:50:51.891826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.649 [2024-12-06 09:50:51.891877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:26.649 [2024-12-06 09:50:51.892050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:26.649 [2024-12-06 09:50:51.892066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:26.649 [2024-12-06 09:50:51.892362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:26.649 [2024-12-06 09:50:51.892541] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:26.649 [2024-12-06 09:50:51.892553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:26.649 [2024-12-06 09:50:51.892711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.649 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.909 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.909 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.909 "name": "raid_bdev1", 00:13:26.909 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:26.909 "strip_size_kb": 0, 00:13:26.909 "state": "online", 00:13:26.909 "raid_level": "raid1", 00:13:26.909 "superblock": true, 00:13:26.909 "num_base_bdevs": 4, 00:13:26.909 "num_base_bdevs_discovered": 4, 00:13:26.909 "num_base_bdevs_operational": 4, 00:13:26.909 "base_bdevs_list": [ 00:13:26.909 { 00:13:26.909 "name": "BaseBdev1", 00:13:26.909 "uuid": "5260ebba-6a0e-5d21-abea-94147bddd768", 00:13:26.909 "is_configured": true, 00:13:26.909 "data_offset": 2048, 00:13:26.909 "data_size": 63488 00:13:26.909 }, 00:13:26.909 { 00:13:26.909 "name": "BaseBdev2", 00:13:26.909 "uuid": "c83b84af-9868-54b6-869d-438c142cdb29", 00:13:26.909 "is_configured": true, 00:13:26.909 "data_offset": 2048, 00:13:26.909 "data_size": 63488 00:13:26.909 }, 00:13:26.909 { 00:13:26.909 "name": "BaseBdev3", 00:13:26.909 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:26.909 "is_configured": true, 00:13:26.909 "data_offset": 2048, 00:13:26.909 "data_size": 63488 00:13:26.909 }, 00:13:26.910 { 00:13:26.910 "name": "BaseBdev4", 00:13:26.910 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:26.910 "is_configured": true, 00:13:26.910 "data_offset": 2048, 00:13:26.910 "data_size": 63488 00:13:26.910 } 00:13:26.910 ] 00:13:26.910 }' 
00:13:26.910 09:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.910 09:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.169 [2024-12-06 09:50:52.317589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:27.169 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:27.551 [2024-12-06 09:50:52.568873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:27.551 /dev/nbd0 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@877 -- # break 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.551 1+0 records in 00:13:27.551 1+0 records out 00:13:27.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330815 s, 12.4 MB/s 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:27.551 09:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:32.844 63488+0 records in 00:13:32.844 63488+0 records out 00:13:32.844 32505856 bytes (33 MB, 31 MiB) copied, 5.09705 s, 6.4 MB/s 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:32.844 09:50:57 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:32.844 [2024-12-06 09:50:57.967300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.844 [2024-12-06 09:50:57.982960] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.844 09:50:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.844 09:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.844 09:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.844 "name": "raid_bdev1", 00:13:32.844 "uuid": 
"fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:32.844 "strip_size_kb": 0, 00:13:32.844 "state": "online", 00:13:32.844 "raid_level": "raid1", 00:13:32.844 "superblock": true, 00:13:32.844 "num_base_bdevs": 4, 00:13:32.844 "num_base_bdevs_discovered": 3, 00:13:32.844 "num_base_bdevs_operational": 3, 00:13:32.844 "base_bdevs_list": [ 00:13:32.844 { 00:13:32.844 "name": null, 00:13:32.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.844 "is_configured": false, 00:13:32.844 "data_offset": 0, 00:13:32.844 "data_size": 63488 00:13:32.844 }, 00:13:32.844 { 00:13:32.844 "name": "BaseBdev2", 00:13:32.844 "uuid": "c83b84af-9868-54b6-869d-438c142cdb29", 00:13:32.844 "is_configured": true, 00:13:32.844 "data_offset": 2048, 00:13:32.844 "data_size": 63488 00:13:32.844 }, 00:13:32.844 { 00:13:32.844 "name": "BaseBdev3", 00:13:32.844 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:32.844 "is_configured": true, 00:13:32.844 "data_offset": 2048, 00:13:32.844 "data_size": 63488 00:13:32.844 }, 00:13:32.844 { 00:13:32.844 "name": "BaseBdev4", 00:13:32.844 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:32.844 "is_configured": true, 00:13:32.844 "data_offset": 2048, 00:13:32.844 "data_size": 63488 00:13:32.844 } 00:13:32.844 ] 00:13:32.844 }' 00:13:32.844 09:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.844 09:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.411 09:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:33.411 09:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.411 09:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.411 [2024-12-06 09:50:58.386312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.411 [2024-12-06 09:50:58.402767] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:33.411 09:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.411 09:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:33.411 [2024-12-06 09:50:58.404686] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.348 "name": "raid_bdev1", 00:13:34.348 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:34.348 "strip_size_kb": 0, 00:13:34.348 "state": "online", 00:13:34.348 "raid_level": "raid1", 00:13:34.348 "superblock": true, 00:13:34.348 "num_base_bdevs": 4, 00:13:34.348 "num_base_bdevs_discovered": 4, 00:13:34.348 "num_base_bdevs_operational": 4, 00:13:34.348 "process": { 00:13:34.348 "type": 
"rebuild", 00:13:34.348 "target": "spare", 00:13:34.348 "progress": { 00:13:34.348 "blocks": 20480, 00:13:34.348 "percent": 32 00:13:34.348 } 00:13:34.348 }, 00:13:34.348 "base_bdevs_list": [ 00:13:34.348 { 00:13:34.348 "name": "spare", 00:13:34.348 "uuid": "bc4ce39f-0090-5e8a-ba6c-3e12ae933282", 00:13:34.348 "is_configured": true, 00:13:34.348 "data_offset": 2048, 00:13:34.348 "data_size": 63488 00:13:34.348 }, 00:13:34.348 { 00:13:34.348 "name": "BaseBdev2", 00:13:34.348 "uuid": "c83b84af-9868-54b6-869d-438c142cdb29", 00:13:34.348 "is_configured": true, 00:13:34.348 "data_offset": 2048, 00:13:34.348 "data_size": 63488 00:13:34.348 }, 00:13:34.348 { 00:13:34.348 "name": "BaseBdev3", 00:13:34.348 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:34.348 "is_configured": true, 00:13:34.348 "data_offset": 2048, 00:13:34.348 "data_size": 63488 00:13:34.348 }, 00:13:34.348 { 00:13:34.348 "name": "BaseBdev4", 00:13:34.348 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:34.348 "is_configured": true, 00:13:34.348 "data_offset": 2048, 00:13:34.348 "data_size": 63488 00:13:34.348 } 00:13:34.348 ] 00:13:34.348 }' 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.348 09:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.348 [2024-12-06 09:50:59.540165] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.348 [2024-12-06 09:50:59.609686] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:34.348 [2024-12-06 09:50:59.609766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.348 [2024-12-06 09:50:59.609783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:34.348 [2024-12-06 09:50:59.609792] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.607 "name": "raid_bdev1", 00:13:34.607 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:34.607 "strip_size_kb": 0, 00:13:34.607 "state": "online", 00:13:34.607 "raid_level": "raid1", 00:13:34.607 "superblock": true, 00:13:34.607 "num_base_bdevs": 4, 00:13:34.607 "num_base_bdevs_discovered": 3, 00:13:34.607 "num_base_bdevs_operational": 3, 00:13:34.607 "base_bdevs_list": [ 00:13:34.607 { 00:13:34.607 "name": null, 00:13:34.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.607 "is_configured": false, 00:13:34.607 "data_offset": 0, 00:13:34.607 "data_size": 63488 00:13:34.607 }, 00:13:34.607 { 00:13:34.607 "name": "BaseBdev2", 00:13:34.607 "uuid": "c83b84af-9868-54b6-869d-438c142cdb29", 00:13:34.607 "is_configured": true, 00:13:34.607 "data_offset": 2048, 00:13:34.607 "data_size": 63488 00:13:34.607 }, 00:13:34.607 { 00:13:34.607 "name": "BaseBdev3", 00:13:34.607 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:34.607 "is_configured": true, 00:13:34.607 "data_offset": 2048, 00:13:34.607 "data_size": 63488 00:13:34.607 }, 00:13:34.607 { 00:13:34.607 "name": "BaseBdev4", 00:13:34.607 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:34.607 "is_configured": true, 00:13:34.607 "data_offset": 2048, 00:13:34.607 "data_size": 63488 00:13:34.607 } 00:13:34.607 ] 00:13:34.607 }' 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.607 09:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.866 09:51:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.866 09:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.866 09:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:34.866 09:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.866 09:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.866 09:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.866 09:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.866 09:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.866 09:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.866 09:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.867 09:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.867 "name": "raid_bdev1", 00:13:34.867 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:34.867 "strip_size_kb": 0, 00:13:34.867 "state": "online", 00:13:34.867 "raid_level": "raid1", 00:13:34.867 "superblock": true, 00:13:34.867 "num_base_bdevs": 4, 00:13:34.867 "num_base_bdevs_discovered": 3, 00:13:34.867 "num_base_bdevs_operational": 3, 00:13:34.867 "base_bdevs_list": [ 00:13:34.867 { 00:13:34.867 "name": null, 00:13:34.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.867 "is_configured": false, 00:13:34.867 "data_offset": 0, 00:13:34.867 "data_size": 63488 00:13:34.867 }, 00:13:34.867 { 00:13:34.867 "name": "BaseBdev2", 00:13:34.867 "uuid": "c83b84af-9868-54b6-869d-438c142cdb29", 00:13:34.867 "is_configured": true, 00:13:34.867 "data_offset": 2048, 00:13:34.867 "data_size": 
63488 00:13:34.867 }, 00:13:34.867 { 00:13:34.867 "name": "BaseBdev3", 00:13:34.867 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:34.867 "is_configured": true, 00:13:34.867 "data_offset": 2048, 00:13:34.867 "data_size": 63488 00:13:34.867 }, 00:13:34.867 { 00:13:34.867 "name": "BaseBdev4", 00:13:34.867 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:34.867 "is_configured": true, 00:13:34.867 "data_offset": 2048, 00:13:34.867 "data_size": 63488 00:13:34.867 } 00:13:34.867 ] 00:13:34.867 }' 00:13:34.867 09:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.143 09:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.143 09:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.143 09:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.143 09:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.143 09:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.143 09:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.143 [2024-12-06 09:51:00.218017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.143 [2024-12-06 09:51:00.234429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:35.143 09:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.143 09:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:35.143 [2024-12-06 09:51:00.236620] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.080 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:13:36.080 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.080 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.080 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.080 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.080 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.080 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.080 09:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.080 09:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.080 09:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.080 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.080 "name": "raid_bdev1", 00:13:36.080 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:36.080 "strip_size_kb": 0, 00:13:36.080 "state": "online", 00:13:36.080 "raid_level": "raid1", 00:13:36.080 "superblock": true, 00:13:36.080 "num_base_bdevs": 4, 00:13:36.080 "num_base_bdevs_discovered": 4, 00:13:36.080 "num_base_bdevs_operational": 4, 00:13:36.080 "process": { 00:13:36.080 "type": "rebuild", 00:13:36.080 "target": "spare", 00:13:36.080 "progress": { 00:13:36.080 "blocks": 20480, 00:13:36.080 "percent": 32 00:13:36.080 } 00:13:36.080 }, 00:13:36.080 "base_bdevs_list": [ 00:13:36.080 { 00:13:36.080 "name": "spare", 00:13:36.080 "uuid": "bc4ce39f-0090-5e8a-ba6c-3e12ae933282", 00:13:36.080 "is_configured": true, 00:13:36.080 "data_offset": 2048, 00:13:36.080 "data_size": 63488 00:13:36.080 }, 00:13:36.080 { 00:13:36.080 "name": "BaseBdev2", 00:13:36.080 "uuid": 
"c83b84af-9868-54b6-869d-438c142cdb29", 00:13:36.080 "is_configured": true, 00:13:36.080 "data_offset": 2048, 00:13:36.080 "data_size": 63488 00:13:36.080 }, 00:13:36.080 { 00:13:36.080 "name": "BaseBdev3", 00:13:36.080 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:36.080 "is_configured": true, 00:13:36.080 "data_offset": 2048, 00:13:36.080 "data_size": 63488 00:13:36.080 }, 00:13:36.080 { 00:13:36.080 "name": "BaseBdev4", 00:13:36.080 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:36.080 "is_configured": true, 00:13:36.080 "data_offset": 2048, 00:13:36.080 "data_size": 63488 00:13:36.080 } 00:13:36.080 ] 00:13:36.080 }' 00:13:36.081 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.081 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.081 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.339 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.339 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:36.339 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:36.339 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:36.339 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:36.339 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.340 09:51:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.340 [2024-12-06 09:51:01.399959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:36.340 [2024-12-06 09:51:01.542016] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.340 "name": "raid_bdev1", 00:13:36.340 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:36.340 "strip_size_kb": 0, 00:13:36.340 
"state": "online", 00:13:36.340 "raid_level": "raid1", 00:13:36.340 "superblock": true, 00:13:36.340 "num_base_bdevs": 4, 00:13:36.340 "num_base_bdevs_discovered": 3, 00:13:36.340 "num_base_bdevs_operational": 3, 00:13:36.340 "process": { 00:13:36.340 "type": "rebuild", 00:13:36.340 "target": "spare", 00:13:36.340 "progress": { 00:13:36.340 "blocks": 24576, 00:13:36.340 "percent": 38 00:13:36.340 } 00:13:36.340 }, 00:13:36.340 "base_bdevs_list": [ 00:13:36.340 { 00:13:36.340 "name": "spare", 00:13:36.340 "uuid": "bc4ce39f-0090-5e8a-ba6c-3e12ae933282", 00:13:36.340 "is_configured": true, 00:13:36.340 "data_offset": 2048, 00:13:36.340 "data_size": 63488 00:13:36.340 }, 00:13:36.340 { 00:13:36.340 "name": null, 00:13:36.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.340 "is_configured": false, 00:13:36.340 "data_offset": 0, 00:13:36.340 "data_size": 63488 00:13:36.340 }, 00:13:36.340 { 00:13:36.340 "name": "BaseBdev3", 00:13:36.340 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:36.340 "is_configured": true, 00:13:36.340 "data_offset": 2048, 00:13:36.340 "data_size": 63488 00:13:36.340 }, 00:13:36.340 { 00:13:36.340 "name": "BaseBdev4", 00:13:36.340 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:36.340 "is_configured": true, 00:13:36.340 "data_offset": 2048, 00:13:36.340 "data_size": 63488 00:13:36.340 } 00:13:36.340 ] 00:13:36.340 }' 00:13:36.340 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=455 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.598 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.599 09:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.599 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.599 "name": "raid_bdev1", 00:13:36.599 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:36.599 "strip_size_kb": 0, 00:13:36.599 "state": "online", 00:13:36.599 "raid_level": "raid1", 00:13:36.599 "superblock": true, 00:13:36.599 "num_base_bdevs": 4, 00:13:36.599 "num_base_bdevs_discovered": 3, 00:13:36.599 "num_base_bdevs_operational": 3, 00:13:36.599 "process": { 00:13:36.599 "type": "rebuild", 00:13:36.599 "target": "spare", 00:13:36.599 "progress": { 00:13:36.599 "blocks": 26624, 00:13:36.599 "percent": 41 00:13:36.599 } 00:13:36.599 }, 00:13:36.599 "base_bdevs_list": [ 00:13:36.599 { 00:13:36.599 "name": "spare", 00:13:36.599 "uuid": "bc4ce39f-0090-5e8a-ba6c-3e12ae933282", 00:13:36.599 "is_configured": 
true, 00:13:36.599 "data_offset": 2048, 00:13:36.599 "data_size": 63488 00:13:36.599 }, 00:13:36.599 { 00:13:36.599 "name": null, 00:13:36.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.599 "is_configured": false, 00:13:36.599 "data_offset": 0, 00:13:36.599 "data_size": 63488 00:13:36.599 }, 00:13:36.599 { 00:13:36.599 "name": "BaseBdev3", 00:13:36.599 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:36.599 "is_configured": true, 00:13:36.599 "data_offset": 2048, 00:13:36.599 "data_size": 63488 00:13:36.599 }, 00:13:36.599 { 00:13:36.599 "name": "BaseBdev4", 00:13:36.599 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:36.599 "is_configured": true, 00:13:36.599 "data_offset": 2048, 00:13:36.599 "data_size": 63488 00:13:36.599 } 00:13:36.599 ] 00:13:36.599 }' 00:13:36.599 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.599 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.599 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.599 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.599 09:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.993 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.993 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.993 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.993 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.993 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.993 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:37.993 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.993 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.993 09:51:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.993 09:51:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.993 09:51:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.993 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.993 "name": "raid_bdev1", 00:13:37.993 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:37.993 "strip_size_kb": 0, 00:13:37.993 "state": "online", 00:13:37.993 "raid_level": "raid1", 00:13:37.993 "superblock": true, 00:13:37.993 "num_base_bdevs": 4, 00:13:37.993 "num_base_bdevs_discovered": 3, 00:13:37.993 "num_base_bdevs_operational": 3, 00:13:37.993 "process": { 00:13:37.993 "type": "rebuild", 00:13:37.993 "target": "spare", 00:13:37.993 "progress": { 00:13:37.993 "blocks": 49152, 00:13:37.993 "percent": 77 00:13:37.993 } 00:13:37.993 }, 00:13:37.993 "base_bdevs_list": [ 00:13:37.993 { 00:13:37.993 "name": "spare", 00:13:37.993 "uuid": "bc4ce39f-0090-5e8a-ba6c-3e12ae933282", 00:13:37.993 "is_configured": true, 00:13:37.993 "data_offset": 2048, 00:13:37.993 "data_size": 63488 00:13:37.993 }, 00:13:37.993 { 00:13:37.993 "name": null, 00:13:37.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.993 "is_configured": false, 00:13:37.993 "data_offset": 0, 00:13:37.993 "data_size": 63488 00:13:37.993 }, 00:13:37.993 { 00:13:37.993 "name": "BaseBdev3", 00:13:37.993 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:37.993 "is_configured": true, 00:13:37.993 "data_offset": 2048, 00:13:37.993 "data_size": 63488 00:13:37.993 }, 00:13:37.993 { 00:13:37.993 "name": "BaseBdev4", 00:13:37.993 "uuid": 
"b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:37.993 "is_configured": true, 00:13:37.993 "data_offset": 2048, 00:13:37.993 "data_size": 63488 00:13:37.993 } 00:13:37.993 ] 00:13:37.993 }' 00:13:37.993 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.994 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.994 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.994 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.994 09:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.259 [2024-12-06 09:51:03.451191] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:38.259 [2024-12-06 09:51:03.451366] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:38.259 [2024-12-06 09:51:03.451520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.837 09:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.837 09:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.837 09:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.837 09:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.837 09:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.837 09:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.837 09:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.837 09:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:38.837 09:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.837 09:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.837 09:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.837 "name": "raid_bdev1", 00:13:38.837 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:38.837 "strip_size_kb": 0, 00:13:38.837 "state": "online", 00:13:38.837 "raid_level": "raid1", 00:13:38.837 "superblock": true, 00:13:38.837 "num_base_bdevs": 4, 00:13:38.837 "num_base_bdevs_discovered": 3, 00:13:38.837 "num_base_bdevs_operational": 3, 00:13:38.837 "base_bdevs_list": [ 00:13:38.837 { 00:13:38.837 "name": "spare", 00:13:38.837 "uuid": "bc4ce39f-0090-5e8a-ba6c-3e12ae933282", 00:13:38.837 "is_configured": true, 00:13:38.837 "data_offset": 2048, 00:13:38.837 "data_size": 63488 00:13:38.837 }, 00:13:38.837 { 00:13:38.837 "name": null, 00:13:38.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.837 "is_configured": false, 00:13:38.837 "data_offset": 0, 00:13:38.837 "data_size": 63488 00:13:38.837 }, 00:13:38.837 { 00:13:38.837 "name": "BaseBdev3", 00:13:38.837 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:38.837 "is_configured": true, 00:13:38.837 "data_offset": 2048, 00:13:38.837 "data_size": 63488 00:13:38.837 }, 00:13:38.837 { 00:13:38.837 "name": "BaseBdev4", 00:13:38.837 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:38.837 "is_configured": true, 00:13:38.837 "data_offset": 2048, 00:13:38.837 "data_size": 63488 00:13:38.837 } 00:13:38.837 ] 00:13:38.837 }' 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:38.837 
09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.837 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.097 "name": "raid_bdev1", 00:13:39.097 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:39.097 "strip_size_kb": 0, 00:13:39.097 "state": "online", 00:13:39.097 "raid_level": "raid1", 00:13:39.097 "superblock": true, 00:13:39.097 "num_base_bdevs": 4, 00:13:39.097 "num_base_bdevs_discovered": 3, 00:13:39.097 "num_base_bdevs_operational": 3, 00:13:39.097 "base_bdevs_list": [ 00:13:39.097 { 00:13:39.097 "name": "spare", 00:13:39.097 "uuid": 
"bc4ce39f-0090-5e8a-ba6c-3e12ae933282", 00:13:39.097 "is_configured": true, 00:13:39.097 "data_offset": 2048, 00:13:39.097 "data_size": 63488 00:13:39.097 }, 00:13:39.097 { 00:13:39.097 "name": null, 00:13:39.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.097 "is_configured": false, 00:13:39.097 "data_offset": 0, 00:13:39.097 "data_size": 63488 00:13:39.097 }, 00:13:39.097 { 00:13:39.097 "name": "BaseBdev3", 00:13:39.097 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:39.097 "is_configured": true, 00:13:39.097 "data_offset": 2048, 00:13:39.097 "data_size": 63488 00:13:39.097 }, 00:13:39.097 { 00:13:39.097 "name": "BaseBdev4", 00:13:39.097 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:39.097 "is_configured": true, 00:13:39.097 "data_offset": 2048, 00:13:39.097 "data_size": 63488 00:13:39.097 } 00:13:39.097 ] 00:13:39.097 }' 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.097 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.097 "name": "raid_bdev1", 00:13:39.097 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:39.097 "strip_size_kb": 0, 00:13:39.097 "state": "online", 00:13:39.097 "raid_level": "raid1", 00:13:39.097 "superblock": true, 00:13:39.097 "num_base_bdevs": 4, 00:13:39.097 "num_base_bdevs_discovered": 3, 00:13:39.097 "num_base_bdevs_operational": 3, 00:13:39.097 "base_bdevs_list": [ 00:13:39.097 { 00:13:39.097 "name": "spare", 00:13:39.097 "uuid": "bc4ce39f-0090-5e8a-ba6c-3e12ae933282", 00:13:39.097 "is_configured": true, 00:13:39.097 "data_offset": 2048, 00:13:39.097 "data_size": 63488 00:13:39.097 }, 00:13:39.097 { 00:13:39.097 "name": null, 00:13:39.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.097 "is_configured": false, 00:13:39.097 "data_offset": 0, 00:13:39.097 "data_size": 63488 00:13:39.097 }, 00:13:39.097 { 00:13:39.097 "name": "BaseBdev3", 00:13:39.097 "uuid": 
"82d885b0-7693-5228-bd86-effddc425f44", 00:13:39.097 "is_configured": true, 00:13:39.097 "data_offset": 2048, 00:13:39.097 "data_size": 63488 00:13:39.097 }, 00:13:39.097 { 00:13:39.098 "name": "BaseBdev4", 00:13:39.098 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:39.098 "is_configured": true, 00:13:39.098 "data_offset": 2048, 00:13:39.098 "data_size": 63488 00:13:39.098 } 00:13:39.098 ] 00:13:39.098 }' 00:13:39.098 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.098 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.357 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:39.357 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.357 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.616 [2024-12-06 09:51:04.632097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:39.616 [2024-12-06 09:51:04.632215] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.616 [2024-12-06 09:51:04.632334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.616 [2024-12-06 09:51:04.632457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.616 [2024-12-06 09:51:04.632511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:39.616 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:39.616 /dev/nbd0 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:39.876 09:51:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.876 1+0 records in 00:13:39.876 1+0 records out 00:13:39.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314526 s, 13.0 MB/s 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:39.876 09:51:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:39.876 /dev/nbd1 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.135 1+0 records in 00:13:40.135 1+0 records out 00:13:40.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246343 s, 16.6 MB/s 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.135 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:40.393 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:40.393 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:40.393 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:40.393 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.393 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.393 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:40.393 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:40.393 
09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.393 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.393 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:40.651 [2024-12-06 09:51:05.893760] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:40.651 [2024-12-06 09:51:05.893821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.651 [2024-12-06 09:51:05.893845] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:40.651 [2024-12-06 09:51:05.893856] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.651 [2024-12-06 09:51:05.896454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.651 [2024-12-06 09:51:05.896567] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:40.651 [2024-12-06 09:51:05.896701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:40.651 [2024-12-06 09:51:05.896776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:40.651 [2024-12-06 09:51:05.896967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:40.651 [2024-12-06 09:51:05.897086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:40.651 spare 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.651 09:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.909 [2024-12-06 09:51:05.997019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:40.909 [2024-12-06 09:51:05.997098] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:40.909 [2024-12-06 
09:51:05.997594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:40.909 [2024-12-06 09:51:05.997813] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:40.909 [2024-12-06 09:51:05.997832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:40.909 [2024-12-06 09:51:05.998032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.909 09:51:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.909 "name": "raid_bdev1", 00:13:40.909 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:40.909 "strip_size_kb": 0, 00:13:40.909 "state": "online", 00:13:40.909 "raid_level": "raid1", 00:13:40.909 "superblock": true, 00:13:40.909 "num_base_bdevs": 4, 00:13:40.909 "num_base_bdevs_discovered": 3, 00:13:40.909 "num_base_bdevs_operational": 3, 00:13:40.909 "base_bdevs_list": [ 00:13:40.909 { 00:13:40.909 "name": "spare", 00:13:40.909 "uuid": "bc4ce39f-0090-5e8a-ba6c-3e12ae933282", 00:13:40.909 "is_configured": true, 00:13:40.909 "data_offset": 2048, 00:13:40.909 "data_size": 63488 00:13:40.909 }, 00:13:40.909 { 00:13:40.909 "name": null, 00:13:40.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.909 "is_configured": false, 00:13:40.909 "data_offset": 2048, 00:13:40.909 "data_size": 63488 00:13:40.909 }, 00:13:40.909 { 00:13:40.909 "name": "BaseBdev3", 00:13:40.909 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:40.909 "is_configured": true, 00:13:40.909 "data_offset": 2048, 00:13:40.909 "data_size": 63488 00:13:40.909 }, 00:13:40.909 { 00:13:40.909 "name": "BaseBdev4", 00:13:40.909 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:40.909 "is_configured": true, 00:13:40.909 "data_offset": 2048, 00:13:40.909 "data_size": 63488 00:13:40.909 } 00:13:40.909 ] 00:13:40.909 }' 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.909 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.167 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:13:41.167 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.167 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:41.167 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.167 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.167 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.167 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.167 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.167 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.167 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.426 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.426 "name": "raid_bdev1", 00:13:41.426 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:41.426 "strip_size_kb": 0, 00:13:41.426 "state": "online", 00:13:41.426 "raid_level": "raid1", 00:13:41.426 "superblock": true, 00:13:41.426 "num_base_bdevs": 4, 00:13:41.426 "num_base_bdevs_discovered": 3, 00:13:41.426 "num_base_bdevs_operational": 3, 00:13:41.426 "base_bdevs_list": [ 00:13:41.426 { 00:13:41.427 "name": "spare", 00:13:41.427 "uuid": "bc4ce39f-0090-5e8a-ba6c-3e12ae933282", 00:13:41.427 "is_configured": true, 00:13:41.427 "data_offset": 2048, 00:13:41.427 "data_size": 63488 00:13:41.427 }, 00:13:41.427 { 00:13:41.427 "name": null, 00:13:41.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.427 "is_configured": false, 00:13:41.427 "data_offset": 2048, 00:13:41.427 "data_size": 63488 00:13:41.427 }, 00:13:41.427 { 00:13:41.427 "name": 
"BaseBdev3", 00:13:41.427 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:41.427 "is_configured": true, 00:13:41.427 "data_offset": 2048, 00:13:41.427 "data_size": 63488 00:13:41.427 }, 00:13:41.427 { 00:13:41.427 "name": "BaseBdev4", 00:13:41.427 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:41.427 "is_configured": true, 00:13:41.427 "data_offset": 2048, 00:13:41.427 "data_size": 63488 00:13:41.427 } 00:13:41.427 ] 00:13:41.427 }' 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.427 [2024-12-06 09:51:06.629063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.427 09:51:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.427 "name": "raid_bdev1", 00:13:41.427 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:41.427 "strip_size_kb": 0, 00:13:41.427 "state": "online", 
00:13:41.427 "raid_level": "raid1", 00:13:41.427 "superblock": true, 00:13:41.427 "num_base_bdevs": 4, 00:13:41.427 "num_base_bdevs_discovered": 2, 00:13:41.427 "num_base_bdevs_operational": 2, 00:13:41.427 "base_bdevs_list": [ 00:13:41.427 { 00:13:41.427 "name": null, 00:13:41.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.427 "is_configured": false, 00:13:41.427 "data_offset": 0, 00:13:41.427 "data_size": 63488 00:13:41.427 }, 00:13:41.427 { 00:13:41.427 "name": null, 00:13:41.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.427 "is_configured": false, 00:13:41.427 "data_offset": 2048, 00:13:41.427 "data_size": 63488 00:13:41.427 }, 00:13:41.427 { 00:13:41.427 "name": "BaseBdev3", 00:13:41.427 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:41.427 "is_configured": true, 00:13:41.427 "data_offset": 2048, 00:13:41.427 "data_size": 63488 00:13:41.427 }, 00:13:41.427 { 00:13:41.427 "name": "BaseBdev4", 00:13:41.427 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:41.427 "is_configured": true, 00:13:41.427 "data_offset": 2048, 00:13:41.427 "data_size": 63488 00:13:41.427 } 00:13:41.427 ] 00:13:41.427 }' 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.427 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.996 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:41.996 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.996 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.996 [2024-12-06 09:51:07.124275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:41.996 [2024-12-06 09:51:07.124592] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:13:41.996 [2024-12-06 09:51:07.124617] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:41.996 [2024-12-06 09:51:07.124662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:41.996 [2024-12-06 09:51:07.141998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:41.996 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.996 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:41.997 [2024-12-06 09:51:07.144150] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.955 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.955 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.955 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.955 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.955 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.955 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.955 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.955 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.955 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.955 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.955 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.955 "name": "raid_bdev1", 00:13:42.955 "uuid": 
"fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:42.955 "strip_size_kb": 0, 00:13:42.955 "state": "online", 00:13:42.955 "raid_level": "raid1", 00:13:42.955 "superblock": true, 00:13:42.955 "num_base_bdevs": 4, 00:13:42.955 "num_base_bdevs_discovered": 3, 00:13:42.955 "num_base_bdevs_operational": 3, 00:13:42.955 "process": { 00:13:42.955 "type": "rebuild", 00:13:42.955 "target": "spare", 00:13:42.955 "progress": { 00:13:42.955 "blocks": 20480, 00:13:42.955 "percent": 32 00:13:42.955 } 00:13:42.955 }, 00:13:42.955 "base_bdevs_list": [ 00:13:42.955 { 00:13:42.955 "name": "spare", 00:13:42.955 "uuid": "bc4ce39f-0090-5e8a-ba6c-3e12ae933282", 00:13:42.955 "is_configured": true, 00:13:42.955 "data_offset": 2048, 00:13:42.955 "data_size": 63488 00:13:42.955 }, 00:13:42.955 { 00:13:42.956 "name": null, 00:13:42.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.956 "is_configured": false, 00:13:42.956 "data_offset": 2048, 00:13:42.956 "data_size": 63488 00:13:42.956 }, 00:13:42.956 { 00:13:42.956 "name": "BaseBdev3", 00:13:42.956 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:42.956 "is_configured": true, 00:13:42.956 "data_offset": 2048, 00:13:42.956 "data_size": 63488 00:13:42.956 }, 00:13:42.956 { 00:13:42.956 "name": "BaseBdev4", 00:13:42.956 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:42.956 "is_configured": true, 00:13:42.956 "data_offset": 2048, 00:13:42.956 "data_size": 63488 00:13:42.956 } 00:13:42.956 ] 00:13:42.956 }' 00:13:42.956 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.215 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.215 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.215 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.215 09:51:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:43.215 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.215 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.215 [2024-12-06 09:51:08.292069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.215 [2024-12-06 09:51:08.349757] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:43.215 [2024-12-06 09:51:08.349831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.215 [2024-12-06 09:51:08.349854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.215 [2024-12-06 09:51:08.349862] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:43.215 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.215 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.215 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.215 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.215 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.216 "name": "raid_bdev1", 00:13:43.216 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:43.216 "strip_size_kb": 0, 00:13:43.216 "state": "online", 00:13:43.216 "raid_level": "raid1", 00:13:43.216 "superblock": true, 00:13:43.216 "num_base_bdevs": 4, 00:13:43.216 "num_base_bdevs_discovered": 2, 00:13:43.216 "num_base_bdevs_operational": 2, 00:13:43.216 "base_bdevs_list": [ 00:13:43.216 { 00:13:43.216 "name": null, 00:13:43.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.216 "is_configured": false, 00:13:43.216 "data_offset": 0, 00:13:43.216 "data_size": 63488 00:13:43.216 }, 00:13:43.216 { 00:13:43.216 "name": null, 00:13:43.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.216 "is_configured": false, 00:13:43.216 "data_offset": 2048, 00:13:43.216 "data_size": 63488 00:13:43.216 }, 00:13:43.216 { 00:13:43.216 "name": "BaseBdev3", 00:13:43.216 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:43.216 "is_configured": true, 00:13:43.216 "data_offset": 2048, 00:13:43.216 "data_size": 63488 00:13:43.216 }, 00:13:43.216 { 00:13:43.216 "name": "BaseBdev4", 00:13:43.216 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:43.216 "is_configured": true, 00:13:43.216 
"data_offset": 2048, 00:13:43.216 "data_size": 63488 00:13:43.216 } 00:13:43.216 ] 00:13:43.216 }' 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.216 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.784 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:43.784 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.784 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.784 [2024-12-06 09:51:08.867976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:43.784 [2024-12-06 09:51:08.868111] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.784 [2024-12-06 09:51:08.868189] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:43.784 [2024-12-06 09:51:08.868263] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.784 [2024-12-06 09:51:08.868860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.784 [2024-12-06 09:51:08.868931] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:43.784 [2024-12-06 09:51:08.869100] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:43.784 [2024-12-06 09:51:08.869167] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:43.784 [2024-12-06 09:51:08.869259] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:43.784 [2024-12-06 09:51:08.869337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:43.784 [2024-12-06 09:51:08.887240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:43.784 spare 00:13:43.784 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.784 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:43.784 [2024-12-06 09:51:08.889422] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:44.792 09:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.792 09:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.792 09:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.792 09:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.792 09:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.792 09:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.792 09:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.792 09:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.792 09:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.792 09:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.792 09:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.792 "name": "raid_bdev1", 00:13:44.792 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:44.792 "strip_size_kb": 0, 00:13:44.792 "state": "online", 00:13:44.792 
"raid_level": "raid1", 00:13:44.792 "superblock": true, 00:13:44.792 "num_base_bdevs": 4, 00:13:44.792 "num_base_bdevs_discovered": 3, 00:13:44.792 "num_base_bdevs_operational": 3, 00:13:44.792 "process": { 00:13:44.792 "type": "rebuild", 00:13:44.792 "target": "spare", 00:13:44.792 "progress": { 00:13:44.792 "blocks": 20480, 00:13:44.792 "percent": 32 00:13:44.792 } 00:13:44.792 }, 00:13:44.792 "base_bdevs_list": [ 00:13:44.792 { 00:13:44.792 "name": "spare", 00:13:44.792 "uuid": "bc4ce39f-0090-5e8a-ba6c-3e12ae933282", 00:13:44.792 "is_configured": true, 00:13:44.792 "data_offset": 2048, 00:13:44.792 "data_size": 63488 00:13:44.792 }, 00:13:44.792 { 00:13:44.792 "name": null, 00:13:44.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.792 "is_configured": false, 00:13:44.793 "data_offset": 2048, 00:13:44.793 "data_size": 63488 00:13:44.793 }, 00:13:44.793 { 00:13:44.793 "name": "BaseBdev3", 00:13:44.793 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:44.793 "is_configured": true, 00:13:44.793 "data_offset": 2048, 00:13:44.793 "data_size": 63488 00:13:44.793 }, 00:13:44.793 { 00:13:44.793 "name": "BaseBdev4", 00:13:44.793 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:44.793 "is_configured": true, 00:13:44.793 "data_offset": 2048, 00:13:44.793 "data_size": 63488 00:13:44.793 } 00:13:44.793 ] 00:13:44.793 }' 00:13:44.793 09:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.793 09:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.793 09:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.793 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.793 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:44.793 09:51:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.793 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.793 [2024-12-06 09:51:10.048363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.052 [2024-12-06 09:51:10.095248] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:45.052 [2024-12-06 09:51:10.095319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.053 [2024-12-06 09:51:10.095338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.053 [2024-12-06 09:51:10.095349] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.053 
09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.053 "name": "raid_bdev1", 00:13:45.053 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:45.053 "strip_size_kb": 0, 00:13:45.053 "state": "online", 00:13:45.053 "raid_level": "raid1", 00:13:45.053 "superblock": true, 00:13:45.053 "num_base_bdevs": 4, 00:13:45.053 "num_base_bdevs_discovered": 2, 00:13:45.053 "num_base_bdevs_operational": 2, 00:13:45.053 "base_bdevs_list": [ 00:13:45.053 { 00:13:45.053 "name": null, 00:13:45.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.053 "is_configured": false, 00:13:45.053 "data_offset": 0, 00:13:45.053 "data_size": 63488 00:13:45.053 }, 00:13:45.053 { 00:13:45.053 "name": null, 00:13:45.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.053 "is_configured": false, 00:13:45.053 "data_offset": 2048, 00:13:45.053 "data_size": 63488 00:13:45.053 }, 00:13:45.053 { 00:13:45.053 "name": "BaseBdev3", 00:13:45.053 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:45.053 "is_configured": true, 00:13:45.053 "data_offset": 2048, 00:13:45.053 "data_size": 63488 00:13:45.053 }, 00:13:45.053 { 00:13:45.053 "name": "BaseBdev4", 00:13:45.053 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:45.053 "is_configured": true, 00:13:45.053 "data_offset": 2048, 00:13:45.053 "data_size": 63488 00:13:45.053 } 00:13:45.053 ] 00:13:45.053 }' 00:13:45.053 09:51:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.053 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.313 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:45.313 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.313 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:45.313 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:45.313 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.313 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.313 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.313 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.313 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.573 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.573 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.573 "name": "raid_bdev1", 00:13:45.573 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:45.573 "strip_size_kb": 0, 00:13:45.573 "state": "online", 00:13:45.573 "raid_level": "raid1", 00:13:45.573 "superblock": true, 00:13:45.573 "num_base_bdevs": 4, 00:13:45.573 "num_base_bdevs_discovered": 2, 00:13:45.573 "num_base_bdevs_operational": 2, 00:13:45.573 "base_bdevs_list": [ 00:13:45.573 { 00:13:45.573 "name": null, 00:13:45.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.573 "is_configured": false, 00:13:45.573 "data_offset": 0, 00:13:45.573 "data_size": 63488 00:13:45.573 }, 00:13:45.573 
{ 00:13:45.573 "name": null, 00:13:45.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.573 "is_configured": false, 00:13:45.573 "data_offset": 2048, 00:13:45.573 "data_size": 63488 00:13:45.573 }, 00:13:45.573 { 00:13:45.573 "name": "BaseBdev3", 00:13:45.573 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:45.573 "is_configured": true, 00:13:45.573 "data_offset": 2048, 00:13:45.573 "data_size": 63488 00:13:45.573 }, 00:13:45.573 { 00:13:45.573 "name": "BaseBdev4", 00:13:45.573 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:45.573 "is_configured": true, 00:13:45.573 "data_offset": 2048, 00:13:45.573 "data_size": 63488 00:13:45.573 } 00:13:45.573 ] 00:13:45.573 }' 00:13:45.573 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.573 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.573 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.573 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.573 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:45.573 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.573 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.573 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.573 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:45.573 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.573 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.573 [2024-12-06 09:51:10.729748] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:45.573 [2024-12-06 09:51:10.729808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.573 [2024-12-06 09:51:10.729827] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:45.573 [2024-12-06 09:51:10.729837] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.573 [2024-12-06 09:51:10.730311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.573 [2024-12-06 09:51:10.730338] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:45.573 [2024-12-06 09:51:10.730425] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:45.573 [2024-12-06 09:51:10.730450] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:45.573 [2024-12-06 09:51:10.730459] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:45.573 [2024-12-06 09:51:10.730490] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:45.573 BaseBdev1 00:13:45.574 09:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.574 09:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.513 09:51:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.513 09:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.774 09:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.774 "name": "raid_bdev1", 00:13:46.774 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:46.774 "strip_size_kb": 0, 00:13:46.774 "state": "online", 00:13:46.774 "raid_level": "raid1", 00:13:46.774 "superblock": true, 00:13:46.774 "num_base_bdevs": 4, 00:13:46.774 "num_base_bdevs_discovered": 2, 00:13:46.774 "num_base_bdevs_operational": 2, 00:13:46.774 "base_bdevs_list": [ 00:13:46.774 { 00:13:46.774 "name": null, 00:13:46.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.774 "is_configured": false, 00:13:46.774 "data_offset": 0, 00:13:46.774 "data_size": 63488 00:13:46.774 }, 00:13:46.774 { 00:13:46.774 "name": null, 00:13:46.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.774 
"is_configured": false, 00:13:46.774 "data_offset": 2048, 00:13:46.774 "data_size": 63488 00:13:46.774 }, 00:13:46.774 { 00:13:46.774 "name": "BaseBdev3", 00:13:46.774 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:46.774 "is_configured": true, 00:13:46.774 "data_offset": 2048, 00:13:46.774 "data_size": 63488 00:13:46.774 }, 00:13:46.774 { 00:13:46.774 "name": "BaseBdev4", 00:13:46.774 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:46.774 "is_configured": true, 00:13:46.774 "data_offset": 2048, 00:13:46.774 "data_size": 63488 00:13:46.774 } 00:13:46.774 ] 00:13:46.774 }' 00:13:46.774 09:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.774 09:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.034 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.034 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.034 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.034 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.034 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.034 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.034 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.034 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.034 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.034 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.034 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:47.034 "name": "raid_bdev1", 00:13:47.034 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:47.034 "strip_size_kb": 0, 00:13:47.034 "state": "online", 00:13:47.034 "raid_level": "raid1", 00:13:47.034 "superblock": true, 00:13:47.034 "num_base_bdevs": 4, 00:13:47.034 "num_base_bdevs_discovered": 2, 00:13:47.034 "num_base_bdevs_operational": 2, 00:13:47.034 "base_bdevs_list": [ 00:13:47.034 { 00:13:47.034 "name": null, 00:13:47.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.034 "is_configured": false, 00:13:47.034 "data_offset": 0, 00:13:47.034 "data_size": 63488 00:13:47.034 }, 00:13:47.034 { 00:13:47.034 "name": null, 00:13:47.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.034 "is_configured": false, 00:13:47.034 "data_offset": 2048, 00:13:47.034 "data_size": 63488 00:13:47.034 }, 00:13:47.034 { 00:13:47.034 "name": "BaseBdev3", 00:13:47.034 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:47.034 "is_configured": true, 00:13:47.034 "data_offset": 2048, 00:13:47.034 "data_size": 63488 00:13:47.034 }, 00:13:47.034 { 00:13:47.034 "name": "BaseBdev4", 00:13:47.034 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:47.034 "is_configured": true, 00:13:47.034 "data_offset": 2048, 00:13:47.034 "data_size": 63488 00:13:47.034 } 00:13:47.034 ] 00:13:47.034 }' 00:13:47.035 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.035 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.035 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.295 [2024-12-06 09:51:12.343213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.295 [2024-12-06 09:51:12.343486] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:47.295 [2024-12-06 09:51:12.343555] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:47.295 request: 00:13:47.295 { 00:13:47.295 "base_bdev": "BaseBdev1", 00:13:47.295 "raid_bdev": "raid_bdev1", 00:13:47.295 "method": "bdev_raid_add_base_bdev", 00:13:47.295 "req_id": 1 00:13:47.295 } 00:13:47.295 Got JSON-RPC error response 00:13:47.295 response: 00:13:47.295 { 00:13:47.295 "code": -22, 00:13:47.295 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:47.295 } 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:47.295 09:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.235 "name": "raid_bdev1", 00:13:48.235 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:48.235 "strip_size_kb": 0, 00:13:48.235 "state": "online", 00:13:48.235 "raid_level": "raid1", 00:13:48.235 "superblock": true, 00:13:48.235 "num_base_bdevs": 4, 00:13:48.235 "num_base_bdevs_discovered": 2, 00:13:48.235 "num_base_bdevs_operational": 2, 00:13:48.235 "base_bdevs_list": [ 00:13:48.235 { 00:13:48.235 "name": null, 00:13:48.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.235 "is_configured": false, 00:13:48.235 "data_offset": 0, 00:13:48.235 "data_size": 63488 00:13:48.235 }, 00:13:48.235 { 00:13:48.235 "name": null, 00:13:48.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.235 "is_configured": false, 00:13:48.235 "data_offset": 2048, 00:13:48.235 "data_size": 63488 00:13:48.235 }, 00:13:48.235 { 00:13:48.235 "name": "BaseBdev3", 00:13:48.235 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:48.235 "is_configured": true, 00:13:48.235 "data_offset": 2048, 00:13:48.235 "data_size": 63488 00:13:48.235 }, 00:13:48.235 { 00:13:48.235 "name": "BaseBdev4", 00:13:48.235 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:48.235 "is_configured": true, 00:13:48.235 "data_offset": 2048, 00:13:48.235 "data_size": 63488 00:13:48.235 } 00:13:48.235 ] 00:13:48.235 }' 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.235 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.805 09:51:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.805 "name": "raid_bdev1", 00:13:48.805 "uuid": "fcbbe2b6-cc7e-47f0-a4f5-2f8ea4f9d816", 00:13:48.805 "strip_size_kb": 0, 00:13:48.805 "state": "online", 00:13:48.805 "raid_level": "raid1", 00:13:48.805 "superblock": true, 00:13:48.805 "num_base_bdevs": 4, 00:13:48.805 "num_base_bdevs_discovered": 2, 00:13:48.805 "num_base_bdevs_operational": 2, 00:13:48.805 "base_bdevs_list": [ 00:13:48.805 { 00:13:48.805 "name": null, 00:13:48.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.805 "is_configured": false, 00:13:48.805 "data_offset": 0, 00:13:48.805 "data_size": 63488 00:13:48.805 }, 00:13:48.805 { 00:13:48.805 "name": null, 00:13:48.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.805 "is_configured": false, 00:13:48.805 "data_offset": 2048, 00:13:48.805 "data_size": 63488 00:13:48.805 }, 00:13:48.805 { 00:13:48.805 "name": "BaseBdev3", 00:13:48.805 "uuid": "82d885b0-7693-5228-bd86-effddc425f44", 00:13:48.805 "is_configured": true, 00:13:48.805 "data_offset": 2048, 00:13:48.805 "data_size": 63488 00:13:48.805 }, 
00:13:48.805 { 00:13:48.805 "name": "BaseBdev4", 00:13:48.805 "uuid": "b9cad4e2-3e59-550b-97a7-2b90751cc18a", 00:13:48.805 "is_configured": true, 00:13:48.805 "data_offset": 2048, 00:13:48.805 "data_size": 63488 00:13:48.805 } 00:13:48.805 ] 00:13:48.805 }' 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77879 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77879 ']' 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77879 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77879 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.805 killing process with pid 77879 00:13:48.805 Received shutdown signal, test time was about 60.000000 seconds 00:13:48.805 00:13:48.805 Latency(us) 00:13:48.805 [2024-12-06T09:51:14.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.805 [2024-12-06T09:51:14.078Z] =================================================================================================================== 00:13:48.805 [2024-12-06T09:51:14.078Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77879' 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77879 00:13:48.805 [2024-12-06 09:51:13.980487] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.805 [2024-12-06 09:51:13.980609] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.805 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77879 00:13:48.805 [2024-12-06 09:51:13.980679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.805 [2024-12-06 09:51:13.980689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:49.376 [2024-12-06 09:51:14.452453] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.313 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:50.313 00:13:50.313 real 0m24.920s 00:13:50.313 user 0m30.401s 00:13:50.313 sys 0m3.685s 00:13:50.313 09:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.313 09:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.313 ************************************ 00:13:50.313 END TEST raid_rebuild_test_sb 00:13:50.313 ************************************ 00:13:50.573 09:51:15 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:50.573 09:51:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:50.573 09:51:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.573 09:51:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
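The trace that follows shows `run_test` invoking `raid_rebuild_test raid1 4 false true true` and the function immediately binding those positional arguments to `local raid_level`, `num_base_bdevs`, `superblock`, `background_io`, and `verify`, then looping `(( i <= num_base_bdevs ))` to build the `BaseBdev1..BaseBdevN` list. A minimal standalone sketch of that unpacking pattern (not the actual SPDK harness — the body here only reproduces the argument handling and name-list loop visible in the trace):

```shell
#!/usr/bin/env bash
# Sketch of the argument-unpacking pattern traced below; the real
# raid_rebuild_test in bdev_raid.sh goes on to create the bdevs via RPC.
raid_rebuild_test() {
	local raid_level=$1     # e.g. raid1
	local num_base_bdevs=$2 # e.g. 4
	local superblock=$3     # true/false
	local background_io=$4  # true/false
	local verify=$5         # true/false

	# Build BaseBdev1..BaseBdevN, mirroring the (( i <= num_base_bdevs )) loop.
	local base_bdevs=()
	local i
	for ((i = 1; i <= num_base_bdevs; i++)); do
		base_bdevs+=("BaseBdev$i")
	done
	echo "${base_bdevs[@]}"
}

raid_rebuild_test raid1 4 false true true
```

With the arguments from this run, the sketch prints `BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4`, matching the names echoed in the trace.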
00:13:50.573 ************************************ 00:13:50.573 START TEST raid_rebuild_test_io 00:13:50.573 ************************************ 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78634 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78634 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78634 ']' 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.573 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.573 09:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.573 [2024-12-06 09:51:15.745511] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:13:50.573 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:50.573 Zero copy mechanism will not be used. 00:13:50.573 [2024-12-06 09:51:15.745726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78634 ] 00:13:50.831 [2024-12-06 09:51:15.899195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.832 [2024-12-06 09:51:16.010513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.090 [2024-12-06 09:51:16.205081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.090 [2024-12-06 09:51:16.205182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.347 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.347 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:51.347 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:51.347 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:51.347 09:51:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.347 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.606 BaseBdev1_malloc 00:13:51.606 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.606 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:51.606 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.606 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.606 [2024-12-06 09:51:16.624886] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:51.607 [2024-12-06 09:51:16.624949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.607 [2024-12-06 09:51:16.624971] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:51.607 [2024-12-06 09:51:16.624982] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.607 [2024-12-06 09:51:16.627041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.607 [2024-12-06 09:51:16.627083] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:51.607 BaseBdev1 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.607 
BaseBdev2_malloc 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.607 [2024-12-06 09:51:16.678203] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:51.607 [2024-12-06 09:51:16.678329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.607 [2024-12-06 09:51:16.678360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:51.607 [2024-12-06 09:51:16.678371] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.607 [2024-12-06 09:51:16.680446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.607 [2024-12-06 09:51:16.680488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:51.607 BaseBdev2 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.607 BaseBdev3_malloc 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.607 [2024-12-06 09:51:16.747472] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:51.607 [2024-12-06 09:51:16.747533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.607 [2024-12-06 09:51:16.747558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:51.607 [2024-12-06 09:51:16.747568] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.607 [2024-12-06 09:51:16.749935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.607 [2024-12-06 09:51:16.749983] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:51.607 BaseBdev3 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.607 BaseBdev4_malloc 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.607 [2024-12-06 09:51:16.801501] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:51.607 [2024-12-06 09:51:16.801583] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.607 [2024-12-06 09:51:16.801604] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:51.607 [2024-12-06 09:51:16.801614] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.607 [2024-12-06 09:51:16.803639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.607 [2024-12-06 09:51:16.803683] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:51.607 BaseBdev4 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.607 spare_malloc 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.607 spare_delay 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.607 [2024-12-06 09:51:16.869198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:51.607 [2024-12-06 09:51:16.869319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.607 [2024-12-06 09:51:16.869342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:51.607 [2024-12-06 09:51:16.869353] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.607 [2024-12-06 09:51:16.871404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.607 [2024-12-06 09:51:16.871445] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:51.607 spare 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.607 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.867 [2024-12-06 09:51:16.881218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.867 [2024-12-06 09:51:16.882947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.867 [2024-12-06 09:51:16.883010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.867 [2024-12-06 09:51:16.883059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:51.867 [2024-12-06 09:51:16.883133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:51.867 [2024-12-06 09:51:16.883158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:51.867 [2024-12-06 09:51:16.883406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:51.867 [2024-12-06 09:51:16.883581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:51.867 [2024-12-06 09:51:16.883595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:51.867 [2024-12-06 09:51:16.883747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.867 "name": "raid_bdev1", 00:13:51.867 "uuid": "c6786548-1508-4185-a258-4458848a15d7", 00:13:51.867 "strip_size_kb": 0, 00:13:51.867 "state": "online", 00:13:51.867 "raid_level": "raid1", 00:13:51.867 "superblock": false, 00:13:51.867 "num_base_bdevs": 4, 00:13:51.867 "num_base_bdevs_discovered": 4, 00:13:51.867 "num_base_bdevs_operational": 4, 00:13:51.867 "base_bdevs_list": [ 00:13:51.867 { 00:13:51.867 "name": "BaseBdev1", 00:13:51.867 "uuid": "5b036fbe-be14-51f2-807e-212df290c7b5", 00:13:51.867 "is_configured": true, 00:13:51.867 "data_offset": 0, 00:13:51.867 "data_size": 65536 00:13:51.867 }, 00:13:51.867 { 00:13:51.867 "name": "BaseBdev2", 00:13:51.867 "uuid": "6a25db1b-159b-5f0b-a04f-f926b1e86a13", 00:13:51.867 "is_configured": true, 00:13:51.867 "data_offset": 0, 00:13:51.867 "data_size": 65536 00:13:51.867 }, 00:13:51.867 { 00:13:51.867 "name": "BaseBdev3", 00:13:51.867 "uuid": "ea67f7a7-0baa-5fca-8095-e8c95c38e903", 00:13:51.867 "is_configured": true, 00:13:51.867 "data_offset": 0, 00:13:51.867 "data_size": 65536 00:13:51.867 }, 00:13:51.867 { 00:13:51.867 "name": "BaseBdev4", 00:13:51.867 "uuid": "b4e9cb08-26a5-587f-9324-cc6037dfe113", 00:13:51.867 "is_configured": true, 00:13:51.867 "data_offset": 0, 00:13:51.867 "data_size": 65536 00:13:51.867 } 00:13:51.867 ] 00:13:51.867 }' 00:13:51.867 
09:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.867 09:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.125 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:52.125 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:52.125 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.125 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.125 [2024-12-06 09:51:17.360763] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.125 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:52.394 09:51:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.394 [2024-12-06 09:51:17.460247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.394 "name": "raid_bdev1", 00:13:52.394 "uuid": "c6786548-1508-4185-a258-4458848a15d7", 00:13:52.394 "strip_size_kb": 0, 00:13:52.394 "state": "online", 00:13:52.394 "raid_level": "raid1", 00:13:52.394 "superblock": false, 00:13:52.394 "num_base_bdevs": 4, 00:13:52.394 "num_base_bdevs_discovered": 3, 00:13:52.394 "num_base_bdevs_operational": 3, 00:13:52.394 "base_bdevs_list": [ 00:13:52.394 { 00:13:52.394 "name": null, 00:13:52.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.394 "is_configured": false, 00:13:52.394 "data_offset": 0, 00:13:52.394 "data_size": 65536 00:13:52.394 }, 00:13:52.394 { 00:13:52.394 "name": "BaseBdev2", 00:13:52.394 "uuid": "6a25db1b-159b-5f0b-a04f-f926b1e86a13", 00:13:52.394 "is_configured": true, 00:13:52.394 "data_offset": 0, 00:13:52.394 "data_size": 65536 00:13:52.394 }, 00:13:52.394 { 00:13:52.394 "name": "BaseBdev3", 00:13:52.394 "uuid": "ea67f7a7-0baa-5fca-8095-e8c95c38e903", 00:13:52.394 "is_configured": true, 00:13:52.394 "data_offset": 0, 00:13:52.394 "data_size": 65536 00:13:52.394 }, 00:13:52.394 { 00:13:52.394 "name": "BaseBdev4", 00:13:52.394 "uuid": "b4e9cb08-26a5-587f-9324-cc6037dfe113", 00:13:52.394 "is_configured": true, 00:13:52.394 "data_offset": 0, 00:13:52.394 "data_size": 65536 00:13:52.394 } 00:13:52.394 ] 00:13:52.394 }' 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.394 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.394 [2024-12-06 09:51:17.564345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:52.394 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:52.394 Zero copy mechanism will not be used. 00:13:52.394 Running I/O for 60 seconds... 
00:13:52.670 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:52.670 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.670 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.670 [2024-12-06 09:51:17.918593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:52.930 09:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.930 09:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:52.930 [2024-12-06 09:51:17.975513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:52.930 [2024-12-06 09:51:17.977524] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:52.930 [2024-12-06 09:51:18.099401] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:52.930 [2024-12-06 09:51:18.100153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:53.188 [2024-12-06 09:51:18.218261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:53.188 [2024-12-06 09:51:18.218609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:53.447 188.00 IOPS, 564.00 MiB/s [2024-12-06T09:51:18.720Z] [2024-12-06 09:51:18.689586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:53.706 09:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.706 09:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:53.706 09:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.706 09:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.706 09:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.706 09:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.706 09:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.706 09:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.706 09:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.966 09:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.966 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.966 "name": "raid_bdev1", 00:13:53.966 "uuid": "c6786548-1508-4185-a258-4458848a15d7", 00:13:53.966 "strip_size_kb": 0, 00:13:53.966 "state": "online", 00:13:53.966 "raid_level": "raid1", 00:13:53.966 "superblock": false, 00:13:53.966 "num_base_bdevs": 4, 00:13:53.966 "num_base_bdevs_discovered": 4, 00:13:53.966 "num_base_bdevs_operational": 4, 00:13:53.966 "process": { 00:13:53.966 "type": "rebuild", 00:13:53.966 "target": "spare", 00:13:53.966 "progress": { 00:13:53.966 "blocks": 14336, 00:13:53.966 "percent": 21 00:13:53.966 } 00:13:53.966 }, 00:13:53.966 "base_bdevs_list": [ 00:13:53.966 { 00:13:53.966 "name": "spare", 00:13:53.966 "uuid": "8e6f9c7f-5360-50af-90c6-e20987a930fb", 00:13:53.966 "is_configured": true, 00:13:53.966 "data_offset": 0, 00:13:53.966 "data_size": 65536 00:13:53.966 }, 00:13:53.966 { 00:13:53.966 "name": "BaseBdev2", 00:13:53.966 "uuid": "6a25db1b-159b-5f0b-a04f-f926b1e86a13", 00:13:53.966 "is_configured": true, 00:13:53.966 "data_offset": 0, 00:13:53.966 
"data_size": 65536 00:13:53.966 }, 00:13:53.966 { 00:13:53.966 "name": "BaseBdev3", 00:13:53.966 "uuid": "ea67f7a7-0baa-5fca-8095-e8c95c38e903", 00:13:53.966 "is_configured": true, 00:13:53.966 "data_offset": 0, 00:13:53.966 "data_size": 65536 00:13:53.966 }, 00:13:53.966 { 00:13:53.966 "name": "BaseBdev4", 00:13:53.966 "uuid": "b4e9cb08-26a5-587f-9324-cc6037dfe113", 00:13:53.966 "is_configured": true, 00:13:53.966 "data_offset": 0, 00:13:53.966 "data_size": 65536 00:13:53.966 } 00:13:53.966 ] 00:13:53.966 }' 00:13:53.966 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.966 [2024-12-06 09:51:19.033489] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:53.967 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.967 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.967 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.967 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:53.967 09:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.967 09:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.967 [2024-12-06 09:51:19.125395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.225 [2024-12-06 09:51:19.242380] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:54.225 [2024-12-06 09:51:19.251808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.225 [2024-12-06 09:51:19.251879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:54.225 [2024-12-06 09:51:19.251895] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:54.225 [2024-12-06 09:51:19.282889] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.225 "name": "raid_bdev1", 00:13:54.225 "uuid": "c6786548-1508-4185-a258-4458848a15d7", 00:13:54.225 "strip_size_kb": 0, 00:13:54.225 "state": "online", 00:13:54.225 "raid_level": "raid1", 00:13:54.225 "superblock": false, 00:13:54.225 "num_base_bdevs": 4, 00:13:54.225 "num_base_bdevs_discovered": 3, 00:13:54.225 "num_base_bdevs_operational": 3, 00:13:54.225 "base_bdevs_list": [ 00:13:54.225 { 00:13:54.225 "name": null, 00:13:54.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.225 "is_configured": false, 00:13:54.225 "data_offset": 0, 00:13:54.225 "data_size": 65536 00:13:54.225 }, 00:13:54.225 { 00:13:54.225 "name": "BaseBdev2", 00:13:54.225 "uuid": "6a25db1b-159b-5f0b-a04f-f926b1e86a13", 00:13:54.225 "is_configured": true, 00:13:54.225 "data_offset": 0, 00:13:54.225 "data_size": 65536 00:13:54.225 }, 00:13:54.225 { 00:13:54.225 "name": "BaseBdev3", 00:13:54.225 "uuid": "ea67f7a7-0baa-5fca-8095-e8c95c38e903", 00:13:54.225 "is_configured": true, 00:13:54.225 "data_offset": 0, 00:13:54.225 "data_size": 65536 00:13:54.225 }, 00:13:54.225 { 00:13:54.225 "name": "BaseBdev4", 00:13:54.225 "uuid": "b4e9cb08-26a5-587f-9324-cc6037dfe113", 00:13:54.225 "is_configured": true, 00:13:54.225 "data_offset": 0, 00:13:54.225 "data_size": 65536 00:13:54.225 } 00:13:54.225 ] 00:13:54.225 }' 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.225 09:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.484 163.50 IOPS, 490.50 MiB/s [2024-12-06T09:51:19.757Z] 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.484 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.484 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:54.484 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.484 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.484 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.484 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.484 09:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.484 09:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.484 09:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.484 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.484 "name": "raid_bdev1", 00:13:54.484 "uuid": "c6786548-1508-4185-a258-4458848a15d7", 00:13:54.484 "strip_size_kb": 0, 00:13:54.484 "state": "online", 00:13:54.484 "raid_level": "raid1", 00:13:54.484 "superblock": false, 00:13:54.484 "num_base_bdevs": 4, 00:13:54.484 "num_base_bdevs_discovered": 3, 00:13:54.484 "num_base_bdevs_operational": 3, 00:13:54.484 "base_bdevs_list": [ 00:13:54.484 { 00:13:54.484 "name": null, 00:13:54.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.484 "is_configured": false, 00:13:54.484 "data_offset": 0, 00:13:54.484 "data_size": 65536 00:13:54.484 }, 00:13:54.484 { 00:13:54.484 "name": "BaseBdev2", 00:13:54.484 "uuid": "6a25db1b-159b-5f0b-a04f-f926b1e86a13", 00:13:54.484 "is_configured": true, 00:13:54.484 "data_offset": 0, 00:13:54.484 "data_size": 65536 00:13:54.484 }, 00:13:54.484 { 00:13:54.484 "name": "BaseBdev3", 00:13:54.484 "uuid": "ea67f7a7-0baa-5fca-8095-e8c95c38e903", 00:13:54.484 "is_configured": true, 00:13:54.484 "data_offset": 0, 00:13:54.484 "data_size": 65536 00:13:54.484 }, 00:13:54.484 { 00:13:54.484 "name": "BaseBdev4", 00:13:54.484 
"uuid": "b4e9cb08-26a5-587f-9324-cc6037dfe113", 00:13:54.484 "is_configured": true, 00:13:54.484 "data_offset": 0, 00:13:54.484 "data_size": 65536 00:13:54.484 } 00:13:54.484 ] 00:13:54.484 }' 00:13:54.484 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.743 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.743 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.743 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.743 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:54.743 09:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.743 09:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.743 [2024-12-06 09:51:19.807271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.744 09:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.744 09:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:54.744 [2024-12-06 09:51:19.861338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:54.744 [2024-12-06 09:51:19.863238] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:54.744 [2024-12-06 09:51:20.008184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:55.002 [2024-12-06 09:51:20.239992] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:55.002 [2024-12-06 09:51:20.240333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 
offset_begin: 0 offset_end: 6144 00:13:55.571 171.00 IOPS, 513.00 MiB/s [2024-12-06T09:51:20.844Z] [2024-12-06 09:51:20.598185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:55.571 [2024-12-06 09:51:20.714462] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:55.830 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.830 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.830 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.830 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.830 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.830 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.830 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.830 09:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.830 09:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.830 09:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.830 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.830 "name": "raid_bdev1", 00:13:55.830 "uuid": "c6786548-1508-4185-a258-4458848a15d7", 00:13:55.830 "strip_size_kb": 0, 00:13:55.830 "state": "online", 00:13:55.830 "raid_level": "raid1", 00:13:55.830 "superblock": false, 00:13:55.830 "num_base_bdevs": 4, 00:13:55.830 "num_base_bdevs_discovered": 4, 00:13:55.830 "num_base_bdevs_operational": 4, 00:13:55.830 
"process": { 00:13:55.830 "type": "rebuild", 00:13:55.830 "target": "spare", 00:13:55.830 "progress": { 00:13:55.830 "blocks": 12288, 00:13:55.830 "percent": 18 00:13:55.830 } 00:13:55.830 }, 00:13:55.830 "base_bdevs_list": [ 00:13:55.830 { 00:13:55.830 "name": "spare", 00:13:55.830 "uuid": "8e6f9c7f-5360-50af-90c6-e20987a930fb", 00:13:55.830 "is_configured": true, 00:13:55.830 "data_offset": 0, 00:13:55.830 "data_size": 65536 00:13:55.830 }, 00:13:55.830 { 00:13:55.830 "name": "BaseBdev2", 00:13:55.831 "uuid": "6a25db1b-159b-5f0b-a04f-f926b1e86a13", 00:13:55.831 "is_configured": true, 00:13:55.831 "data_offset": 0, 00:13:55.831 "data_size": 65536 00:13:55.831 }, 00:13:55.831 { 00:13:55.831 "name": "BaseBdev3", 00:13:55.831 "uuid": "ea67f7a7-0baa-5fca-8095-e8c95c38e903", 00:13:55.831 "is_configured": true, 00:13:55.831 "data_offset": 0, 00:13:55.831 "data_size": 65536 00:13:55.831 }, 00:13:55.831 { 00:13:55.831 "name": "BaseBdev4", 00:13:55.831 "uuid": "b4e9cb08-26a5-587f-9324-cc6037dfe113", 00:13:55.831 "is_configured": true, 00:13:55.831 "data_offset": 0, 00:13:55.831 "data_size": 65536 00:13:55.831 } 00:13:55.831 ] 00:13:55.831 }' 00:13:55.831 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.831 [2024-12-06 09:51:20.938705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:55.831 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.831 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.831 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.831 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:55.831 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 
00:13:55.831 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:55.831 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:55.831 09:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:55.831 09:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.831 09:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.831 [2024-12-06 09:51:21.008428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:56.091 [2024-12-06 09:51:21.175914] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:56.091 [2024-12-06 09:51:21.176044] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.091 "name": "raid_bdev1", 00:13:56.091 "uuid": "c6786548-1508-4185-a258-4458848a15d7", 00:13:56.091 "strip_size_kb": 0, 00:13:56.091 "state": "online", 00:13:56.091 "raid_level": "raid1", 00:13:56.091 "superblock": false, 00:13:56.091 "num_base_bdevs": 4, 00:13:56.091 "num_base_bdevs_discovered": 3, 00:13:56.091 "num_base_bdevs_operational": 3, 00:13:56.091 "process": { 00:13:56.091 "type": "rebuild", 00:13:56.091 "target": "spare", 00:13:56.091 "progress": { 00:13:56.091 "blocks": 16384, 00:13:56.091 "percent": 25 00:13:56.091 } 00:13:56.091 }, 00:13:56.091 "base_bdevs_list": [ 00:13:56.091 { 00:13:56.091 "name": "spare", 00:13:56.091 "uuid": "8e6f9c7f-5360-50af-90c6-e20987a930fb", 00:13:56.091 "is_configured": true, 00:13:56.091 "data_offset": 0, 00:13:56.091 "data_size": 65536 00:13:56.091 }, 00:13:56.091 { 00:13:56.091 "name": null, 00:13:56.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.091 "is_configured": false, 00:13:56.091 "data_offset": 0, 00:13:56.091 "data_size": 65536 00:13:56.091 }, 00:13:56.091 { 00:13:56.091 "name": "BaseBdev3", 00:13:56.091 "uuid": "ea67f7a7-0baa-5fca-8095-e8c95c38e903", 00:13:56.091 "is_configured": true, 00:13:56.091 "data_offset": 0, 00:13:56.091 "data_size": 65536 00:13:56.091 }, 00:13:56.091 { 00:13:56.091 "name": "BaseBdev4", 00:13:56.091 "uuid": "b4e9cb08-26a5-587f-9324-cc6037dfe113", 00:13:56.091 "is_configured": true, 00:13:56.091 "data_offset": 0, 00:13:56.091 "data_size": 65536 00:13:56.091 } 00:13:56.091 ] 00:13:56.091 }' 00:13:56.091 09:51:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=475 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.091 09:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.351 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.351 "name": "raid_bdev1", 00:13:56.351 "uuid": "c6786548-1508-4185-a258-4458848a15d7", 00:13:56.351 "strip_size_kb": 0, 00:13:56.351 
"state": "online", 00:13:56.351 "raid_level": "raid1", 00:13:56.351 "superblock": false, 00:13:56.351 "num_base_bdevs": 4, 00:13:56.351 "num_base_bdevs_discovered": 3, 00:13:56.351 "num_base_bdevs_operational": 3, 00:13:56.351 "process": { 00:13:56.351 "type": "rebuild", 00:13:56.351 "target": "spare", 00:13:56.351 "progress": { 00:13:56.351 "blocks": 18432, 00:13:56.351 "percent": 28 00:13:56.351 } 00:13:56.351 }, 00:13:56.351 "base_bdevs_list": [ 00:13:56.351 { 00:13:56.351 "name": "spare", 00:13:56.351 "uuid": "8e6f9c7f-5360-50af-90c6-e20987a930fb", 00:13:56.351 "is_configured": true, 00:13:56.351 "data_offset": 0, 00:13:56.351 "data_size": 65536 00:13:56.351 }, 00:13:56.351 { 00:13:56.351 "name": null, 00:13:56.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.351 "is_configured": false, 00:13:56.351 "data_offset": 0, 00:13:56.351 "data_size": 65536 00:13:56.351 }, 00:13:56.351 { 00:13:56.351 "name": "BaseBdev3", 00:13:56.351 "uuid": "ea67f7a7-0baa-5fca-8095-e8c95c38e903", 00:13:56.351 "is_configured": true, 00:13:56.351 "data_offset": 0, 00:13:56.351 "data_size": 65536 00:13:56.351 }, 00:13:56.351 { 00:13:56.351 "name": "BaseBdev4", 00:13:56.375 "uuid": "b4e9cb08-26a5-587f-9324-cc6037dfe113", 00:13:56.375 "is_configured": true, 00:13:56.375 "data_offset": 0, 00:13:56.375 "data_size": 65536 00:13:56.375 } 00:13:56.375 ] 00:13:56.375 }' 00:13:56.375 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.375 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.375 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.375 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.375 09:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:56.375 [2024-12-06 09:51:21.552696] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:56.375 [2024-12-06 09:51:21.553322] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:56.636 150.75 IOPS, 452.25 MiB/s [2024-12-06T09:51:21.909Z] [2024-12-06 09:51:21.892791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:56.896 [2024-12-06 09:51:22.004707] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:56.896 [2024-12-06 09:51:22.005425] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.466 "name": "raid_bdev1", 00:13:57.466 "uuid": "c6786548-1508-4185-a258-4458848a15d7", 00:13:57.466 "strip_size_kb": 0, 00:13:57.466 "state": "online", 00:13:57.466 "raid_level": "raid1", 00:13:57.466 "superblock": false, 00:13:57.466 "num_base_bdevs": 4, 00:13:57.466 "num_base_bdevs_discovered": 3, 00:13:57.466 "num_base_bdevs_operational": 3, 00:13:57.466 "process": { 00:13:57.466 "type": "rebuild", 00:13:57.466 "target": "spare", 00:13:57.466 "progress": { 00:13:57.466 "blocks": 34816, 00:13:57.466 "percent": 53 00:13:57.466 } 00:13:57.466 }, 00:13:57.466 "base_bdevs_list": [ 00:13:57.466 { 00:13:57.466 "name": "spare", 00:13:57.466 "uuid": "8e6f9c7f-5360-50af-90c6-e20987a930fb", 00:13:57.466 "is_configured": true, 00:13:57.466 "data_offset": 0, 00:13:57.466 "data_size": 65536 00:13:57.466 }, 00:13:57.466 { 00:13:57.466 "name": null, 00:13:57.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.466 "is_configured": false, 00:13:57.466 "data_offset": 0, 00:13:57.466 "data_size": 65536 00:13:57.466 }, 00:13:57.466 { 00:13:57.466 "name": "BaseBdev3", 00:13:57.466 "uuid": "ea67f7a7-0baa-5fca-8095-e8c95c38e903", 00:13:57.466 "is_configured": true, 00:13:57.466 "data_offset": 0, 00:13:57.466 "data_size": 65536 00:13:57.466 }, 00:13:57.466 { 00:13:57.466 "name": "BaseBdev4", 00:13:57.466 "uuid": "b4e9cb08-26a5-587f-9324-cc6037dfe113", 00:13:57.466 "is_configured": true, 00:13:57.466 "data_offset": 0, 00:13:57.466 "data_size": 65536 00:13:57.466 } 00:13:57.466 ] 00:13:57.466 }' 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:13:57.466 130.40 IOPS, 391.20 MiB/s [2024-12-06T09:51:22.739Z] 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.466 09:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.466 [2024-12-06 09:51:22.620607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:57.726 [2024-12-06 09:51:22.838445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:58.296 [2024-12-06 09:51:23.277957] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:58.557 115.17 IOPS, 345.50 MiB/s [2024-12-06T09:51:23.830Z] [2024-12-06 09:51:23.603286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.557 "name": "raid_bdev1", 00:13:58.557 "uuid": "c6786548-1508-4185-a258-4458848a15d7", 00:13:58.557 "strip_size_kb": 0, 00:13:58.557 "state": "online", 00:13:58.557 "raid_level": "raid1", 00:13:58.557 "superblock": false, 00:13:58.557 "num_base_bdevs": 4, 00:13:58.557 "num_base_bdevs_discovered": 3, 00:13:58.557 "num_base_bdevs_operational": 3, 00:13:58.557 "process": { 00:13:58.557 "type": "rebuild", 00:13:58.557 "target": "spare", 00:13:58.557 "progress": { 00:13:58.557 "blocks": 53248, 00:13:58.557 "percent": 81 00:13:58.557 } 00:13:58.557 }, 00:13:58.557 "base_bdevs_list": [ 00:13:58.557 { 00:13:58.557 "name": "spare", 00:13:58.557 "uuid": "8e6f9c7f-5360-50af-90c6-e20987a930fb", 00:13:58.557 "is_configured": true, 00:13:58.557 "data_offset": 0, 00:13:58.557 "data_size": 65536 00:13:58.557 }, 00:13:58.557 { 00:13:58.557 "name": null, 00:13:58.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.557 "is_configured": false, 00:13:58.557 "data_offset": 0, 00:13:58.557 "data_size": 65536 00:13:58.557 }, 00:13:58.557 { 00:13:58.557 "name": "BaseBdev3", 00:13:58.557 "uuid": "ea67f7a7-0baa-5fca-8095-e8c95c38e903", 00:13:58.557 "is_configured": true, 00:13:58.557 "data_offset": 0, 00:13:58.557 "data_size": 65536 00:13:58.557 }, 00:13:58.557 { 00:13:58.557 "name": "BaseBdev4", 00:13:58.557 "uuid": "b4e9cb08-26a5-587f-9324-cc6037dfe113", 00:13:58.557 "is_configured": true, 00:13:58.557 "data_offset": 0, 00:13:58.557 "data_size": 65536 00:13:58.557 } 00:13:58.557 ] 00:13:58.557 }' 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.557 09:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.127 [2024-12-06 09:51:24.381376] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:59.387 [2024-12-06 09:51:24.481240] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:59.387 [2024-12-06 09:51:24.490333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.647 102.43 IOPS, 307.29 MiB/s [2024-12-06T09:51:24.920Z] 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.647 "name": "raid_bdev1", 00:13:59.647 "uuid": "c6786548-1508-4185-a258-4458848a15d7", 00:13:59.647 "strip_size_kb": 0, 00:13:59.647 "state": "online", 00:13:59.647 "raid_level": "raid1", 00:13:59.647 "superblock": false, 00:13:59.647 "num_base_bdevs": 4, 00:13:59.647 "num_base_bdevs_discovered": 3, 00:13:59.647 "num_base_bdevs_operational": 3, 00:13:59.647 "base_bdevs_list": [ 00:13:59.647 { 00:13:59.647 "name": "spare", 00:13:59.647 "uuid": "8e6f9c7f-5360-50af-90c6-e20987a930fb", 00:13:59.647 "is_configured": true, 00:13:59.647 "data_offset": 0, 00:13:59.647 "data_size": 65536 00:13:59.647 }, 00:13:59.647 { 00:13:59.647 "name": null, 00:13:59.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.647 "is_configured": false, 00:13:59.647 "data_offset": 0, 00:13:59.647 "data_size": 65536 00:13:59.647 }, 00:13:59.647 { 00:13:59.647 "name": "BaseBdev3", 00:13:59.647 "uuid": "ea67f7a7-0baa-5fca-8095-e8c95c38e903", 00:13:59.647 "is_configured": true, 00:13:59.647 "data_offset": 0, 00:13:59.647 "data_size": 65536 00:13:59.647 }, 00:13:59.647 { 00:13:59.647 "name": "BaseBdev4", 00:13:59.647 "uuid": "b4e9cb08-26a5-587f-9324-cc6037dfe113", 00:13:59.647 "is_configured": true, 00:13:59.647 "data_offset": 0, 00:13:59.647 "data_size": 65536 00:13:59.647 } 00:13:59.647 ] 00:13:59.647 }' 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 
00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.647 09:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.917 09:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.917 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.917 "name": "raid_bdev1", 00:13:59.917 "uuid": "c6786548-1508-4185-a258-4458848a15d7", 00:13:59.917 "strip_size_kb": 0, 00:13:59.917 "state": "online", 00:13:59.917 "raid_level": "raid1", 00:13:59.917 "superblock": false, 00:13:59.917 "num_base_bdevs": 4, 00:13:59.917 "num_base_bdevs_discovered": 3, 00:13:59.917 "num_base_bdevs_operational": 3, 00:13:59.917 "base_bdevs_list": [ 00:13:59.917 { 00:13:59.917 "name": "spare", 00:13:59.917 "uuid": "8e6f9c7f-5360-50af-90c6-e20987a930fb", 00:13:59.917 "is_configured": true, 00:13:59.917 "data_offset": 0, 00:13:59.917 "data_size": 65536 00:13:59.917 }, 00:13:59.917 { 00:13:59.917 "name": null, 00:13:59.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.917 "is_configured": false, 00:13:59.917 "data_offset": 0, 
00:13:59.917 "data_size": 65536 00:13:59.917 }, 00:13:59.917 { 00:13:59.917 "name": "BaseBdev3", 00:13:59.917 "uuid": "ea67f7a7-0baa-5fca-8095-e8c95c38e903", 00:13:59.917 "is_configured": true, 00:13:59.917 "data_offset": 0, 00:13:59.917 "data_size": 65536 00:13:59.917 }, 00:13:59.917 { 00:13:59.917 "name": "BaseBdev4", 00:13:59.917 "uuid": "b4e9cb08-26a5-587f-9324-cc6037dfe113", 00:13:59.917 "is_configured": true, 00:13:59.917 "data_offset": 0, 00:13:59.917 "data_size": 65536 00:13:59.917 } 00:13:59.917 ] 00:13:59.917 }' 00:13:59.917 09:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.917 
09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.917 "name": "raid_bdev1", 00:13:59.917 "uuid": "c6786548-1508-4185-a258-4458848a15d7", 00:13:59.917 "strip_size_kb": 0, 00:13:59.917 "state": "online", 00:13:59.917 "raid_level": "raid1", 00:13:59.917 "superblock": false, 00:13:59.917 "num_base_bdevs": 4, 00:13:59.917 "num_base_bdevs_discovered": 3, 00:13:59.917 "num_base_bdevs_operational": 3, 00:13:59.917 "base_bdevs_list": [ 00:13:59.917 { 00:13:59.917 "name": "spare", 00:13:59.917 "uuid": "8e6f9c7f-5360-50af-90c6-e20987a930fb", 00:13:59.917 "is_configured": true, 00:13:59.917 "data_offset": 0, 00:13:59.917 "data_size": 65536 00:13:59.917 }, 00:13:59.917 { 00:13:59.917 "name": null, 00:13:59.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.917 "is_configured": false, 00:13:59.917 "data_offset": 0, 00:13:59.917 "data_size": 65536 00:13:59.917 }, 00:13:59.917 { 00:13:59.917 "name": "BaseBdev3", 00:13:59.917 "uuid": "ea67f7a7-0baa-5fca-8095-e8c95c38e903", 00:13:59.917 "is_configured": true, 00:13:59.917 "data_offset": 0, 00:13:59.917 "data_size": 65536 00:13:59.917 }, 00:13:59.917 { 00:13:59.917 "name": "BaseBdev4", 00:13:59.917 "uuid": "b4e9cb08-26a5-587f-9324-cc6037dfe113", 00:13:59.917 "is_configured": true, 00:13:59.917 "data_offset": 0, 00:13:59.917 "data_size": 65536 
00:13:59.917 } 00:13:59.917 ] 00:13:59.917 }' 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.917 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.486 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:00.486 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.486 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.486 [2024-12-06 09:51:25.468305] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.486 [2024-12-06 09:51:25.468393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.486 93.75 IOPS, 281.25 MiB/s 00:14:00.486 Latency(us) 00:14:00.486 [2024-12-06T09:51:25.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.486 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:00.486 raid_bdev1 : 8.02 93.67 281.02 0.00 0.00 15573.19 336.27 111726.00 00:14:00.486 [2024-12-06T09:51:25.759Z] =================================================================================================================== 00:14:00.486 [2024-12-06T09:51:25.759Z] Total : 93.67 281.02 0.00 0.00 15573.19 336.27 111726.00 00:14:00.486 [2024-12-06 09:51:25.589598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.486 [2024-12-06 09:51:25.589716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.486 [2024-12-06 09:51:25.589845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.486 [2024-12-06 09:51:25.589899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:00.486 { 00:14:00.486 
"results": [ 00:14:00.486 { 00:14:00.487 "job": "raid_bdev1", 00:14:00.487 "core_mask": "0x1", 00:14:00.487 "workload": "randrw", 00:14:00.487 "percentage": 50, 00:14:00.487 "status": "finished", 00:14:00.487 "queue_depth": 2, 00:14:00.487 "io_size": 3145728, 00:14:00.487 "runtime": 8.01721, 00:14:00.487 "iops": 93.67348491557537, 00:14:00.487 "mibps": 281.0204547467261, 00:14:00.487 "io_failed": 0, 00:14:00.487 "io_timeout": 0, 00:14:00.487 "avg_latency_us": 15573.1859168852, 00:14:00.487 "min_latency_us": 336.2655021834061, 00:14:00.487 "max_latency_us": 111726.00174672488 00:14:00.487 } 00:14:00.487 ], 00:14:00.487 "core_count": 1 00:14:00.487 } 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.487 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:00.747 /dev/nbd0 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.747 1+0 records in 
00:14:00.747 1+0 records out 00:14:00.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548186 s, 7.5 MB/s 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:00.747 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:00.748 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:00.748 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.748 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:00.748 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.748 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 
00:14:00.748 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.748 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:00.748 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.748 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.748 09:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:01.007 /dev/nbd1 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.007 1+0 records in 00:14:01.007 1+0 records out 00:14:01.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263261 s, 15.6 MB/s 00:14:01.007 09:51:26 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.007 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:01.266 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:01.266 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.266 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:01.266 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.266 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:01.266 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.266 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd1 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.267 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:01.527 /dev/nbd1 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:01.527 
09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.527 1+0 records in 00:14:01.527 1+0 records out 00:14:01.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379045 s, 10.8 MB/s 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.527 09:51:26 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.527 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:01.787 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:01.787 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.787 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:01.787 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.787 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:01.787 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.787 09:51:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78634 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78634 ']' 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78634 00:14:02.054 09:51:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:02.325 09:51:27 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.325 09:51:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78634 00:14:02.325 09:51:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.325 killing process with pid 78634 00:14:02.325 Received shutdown signal, test time was about 9.811400 seconds 00:14:02.325 00:14:02.325 Latency(us) 00:14:02.325 [2024-12-06T09:51:27.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.325 [2024-12-06T09:51:27.598Z] =================================================================================================================== 00:14:02.325 [2024-12-06T09:51:27.598Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:02.325 09:51:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.325 09:51:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78634' 00:14:02.325 09:51:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78634 00:14:02.325 [2024-12-06 09:51:27.358980] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:02.325 09:51:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78634 00:14:02.585 [2024-12-06 09:51:27.771806] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.966 09:51:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:03.966 00:14:03.966 real 0m13.316s 00:14:03.966 user 0m16.751s 00:14:03.966 sys 0m1.796s 00:14:03.966 09:51:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.966 09:51:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.966 ************************************ 00:14:03.966 END TEST raid_rebuild_test_io 00:14:03.966 
************************************ 00:14:03.966 09:51:29 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:03.966 09:51:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:03.966 09:51:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.966 09:51:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:03.966 ************************************ 00:14:03.966 START TEST raid_rebuild_test_sb_io 00:14:03.966 ************************************ 00:14:03.966 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:03.966 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:03.966 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:03.966 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:03.966 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:03.966 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:03.966 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:03.966 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.966 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.967 09:51:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # 
raid_pid=79044 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79044 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79044 ']' 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.967 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.967 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:03.967 Zero copy mechanism will not be used. 00:14:03.967 [2024-12-06 09:51:29.131237] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:14:03.967 [2024-12-06 09:51:29.131453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79044 ] 00:14:04.227 [2024-12-06 09:51:29.303606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.227 [2024-12-06 09:51:29.414389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.486 [2024-12-06 09:51:29.610405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.486 [2024-12-06 09:51:29.610500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.746 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.746 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:04.746 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.746 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:04.746 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.746 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.746 BaseBdev1_malloc 00:14:04.746 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.746 09:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:04.746 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.746 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.746 [2024-12-06 09:51:30.007410] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:04.746 [2024-12-06 09:51:30.007473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.747 [2024-12-06 09:51:30.007491] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:04.747 [2024-12-06 09:51:30.007502] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.747 [2024-12-06 09:51:30.009551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.747 [2024-12-06 09:51:30.009592] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:04.747 BaseBdev1 00:14:04.747 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.747 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.747 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:04.747 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.747 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.006 BaseBdev2_malloc 00:14:05.006 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.007 [2024-12-06 09:51:30.064917] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:05.007 [2024-12-06 09:51:30.064999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:05.007 [2024-12-06 09:51:30.065021] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:05.007 [2024-12-06 09:51:30.065045] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.007 [2024-12-06 09:51:30.067169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.007 [2024-12-06 09:51:30.067203] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:05.007 BaseBdev2 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.007 BaseBdev3_malloc 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.007 [2024-12-06 09:51:30.132927] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:05.007 [2024-12-06 09:51:30.132987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.007 [2024-12-06 09:51:30.133007] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:05.007 
[2024-12-06 09:51:30.133018] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.007 [2024-12-06 09:51:30.135327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.007 [2024-12-06 09:51:30.135384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:05.007 BaseBdev3 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.007 BaseBdev4_malloc 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.007 [2024-12-06 09:51:30.191105] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:05.007 [2024-12-06 09:51:30.191189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.007 [2024-12-06 09:51:30.191217] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:05.007 [2024-12-06 09:51:30.191229] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.007 [2024-12-06 09:51:30.193369] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.007 [2024-12-06 09:51:30.193456] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:05.007 BaseBdev4 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.007 spare_malloc 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.007 spare_delay 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.007 [2024-12-06 09:51:30.258613] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:05.007 [2024-12-06 09:51:30.258713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.007 [2024-12-06 09:51:30.258736] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:14:05.007 [2024-12-06 09:51:30.258747] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.007 [2024-12-06 09:51:30.260836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.007 [2024-12-06 09:51:30.260878] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:05.007 spare 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.007 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.007 [2024-12-06 09:51:30.270638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:05.007 [2024-12-06 09:51:30.272384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.007 [2024-12-06 09:51:30.272444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.007 [2024-12-06 09:51:30.272493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:05.007 [2024-12-06 09:51:30.272664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:05.007 [2024-12-06 09:51:30.272679] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:05.007 [2024-12-06 09:51:30.272917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:05.007 [2024-12-06 09:51:30.273092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:05.007 [2024-12-06 09:51:30.273101] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:05.007 [2024-12-06 09:51:30.273272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.267 "name": "raid_bdev1", 00:14:05.267 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:05.267 "strip_size_kb": 0, 00:14:05.267 "state": "online", 00:14:05.267 "raid_level": "raid1", 00:14:05.267 "superblock": true, 00:14:05.267 "num_base_bdevs": 4, 00:14:05.267 "num_base_bdevs_discovered": 4, 00:14:05.267 "num_base_bdevs_operational": 4, 00:14:05.267 "base_bdevs_list": [ 00:14:05.267 { 00:14:05.267 "name": "BaseBdev1", 00:14:05.267 "uuid": "c7660559-096e-57ed-b4d1-eebba922dae8", 00:14:05.267 "is_configured": true, 00:14:05.267 "data_offset": 2048, 00:14:05.267 "data_size": 63488 00:14:05.267 }, 00:14:05.267 { 00:14:05.267 "name": "BaseBdev2", 00:14:05.267 "uuid": "a5e78b3b-2f12-5c93-9d10-a3caff89377d", 00:14:05.267 "is_configured": true, 00:14:05.267 "data_offset": 2048, 00:14:05.267 "data_size": 63488 00:14:05.267 }, 00:14:05.267 { 00:14:05.267 "name": "BaseBdev3", 00:14:05.267 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:05.267 "is_configured": true, 00:14:05.267 "data_offset": 2048, 00:14:05.267 "data_size": 63488 00:14:05.267 }, 00:14:05.267 { 00:14:05.267 "name": "BaseBdev4", 00:14:05.267 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:05.267 "is_configured": true, 00:14:05.267 "data_offset": 2048, 00:14:05.267 "data_size": 63488 00:14:05.267 } 00:14:05.267 ] 00:14:05.267 }' 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.267 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.527 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.527 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:05.527 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.527 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.527 [2024-12-06 09:51:30.758172] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.527 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.787 [2024-12-06 09:51:30.837669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.787 09:51:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.787 "name": "raid_bdev1", 00:14:05.787 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:05.787 "strip_size_kb": 0, 00:14:05.787 "state": "online", 00:14:05.787 "raid_level": "raid1", 00:14:05.787 
"superblock": true, 00:14:05.787 "num_base_bdevs": 4, 00:14:05.787 "num_base_bdevs_discovered": 3, 00:14:05.787 "num_base_bdevs_operational": 3, 00:14:05.787 "base_bdevs_list": [ 00:14:05.787 { 00:14:05.787 "name": null, 00:14:05.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.787 "is_configured": false, 00:14:05.787 "data_offset": 0, 00:14:05.787 "data_size": 63488 00:14:05.787 }, 00:14:05.787 { 00:14:05.787 "name": "BaseBdev2", 00:14:05.787 "uuid": "a5e78b3b-2f12-5c93-9d10-a3caff89377d", 00:14:05.787 "is_configured": true, 00:14:05.787 "data_offset": 2048, 00:14:05.787 "data_size": 63488 00:14:05.787 }, 00:14:05.787 { 00:14:05.787 "name": "BaseBdev3", 00:14:05.787 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:05.787 "is_configured": true, 00:14:05.787 "data_offset": 2048, 00:14:05.787 "data_size": 63488 00:14:05.787 }, 00:14:05.787 { 00:14:05.787 "name": "BaseBdev4", 00:14:05.787 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:05.787 "is_configured": true, 00:14:05.787 "data_offset": 2048, 00:14:05.787 "data_size": 63488 00:14:05.787 } 00:14:05.787 ] 00:14:05.787 }' 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.787 09:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.787 [2024-12-06 09:51:30.933491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:05.787 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:05.787 Zero copy mechanism will not be used. 00:14:05.787 Running I/O for 60 seconds... 
00:14:06.047 09:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:06.047 09:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.047 09:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.047 [2024-12-06 09:51:31.304724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:06.306 09:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.306 09:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:06.307 [2024-12-06 09:51:31.368733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:06.307 [2024-12-06 09:51:31.370717] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:06.566 [2024-12-06 09:51:31.617987] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:06.566 [2024-12-06 09:51:31.618796] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:06.826 200.00 IOPS, 600.00 MiB/s [2024-12-06T09:51:32.099Z] [2024-12-06 09:51:32.073551] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:07.085 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.085 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.085 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.085 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.085 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.085 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.085 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.085 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.085 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.346 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.346 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.346 "name": "raid_bdev1", 00:14:07.346 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:07.346 "strip_size_kb": 0, 00:14:07.346 "state": "online", 00:14:07.346 "raid_level": "raid1", 00:14:07.346 "superblock": true, 00:14:07.346 "num_base_bdevs": 4, 00:14:07.346 "num_base_bdevs_discovered": 4, 00:14:07.346 "num_base_bdevs_operational": 4, 00:14:07.346 "process": { 00:14:07.346 "type": "rebuild", 00:14:07.346 "target": "spare", 00:14:07.346 "progress": { 00:14:07.346 "blocks": 12288, 00:14:07.346 "percent": 19 00:14:07.346 } 00:14:07.346 }, 00:14:07.346 "base_bdevs_list": [ 00:14:07.346 { 00:14:07.346 "name": "spare", 00:14:07.346 "uuid": "e17cb8b7-605c-5449-8b92-dc5bd51c7284", 00:14:07.346 "is_configured": true, 00:14:07.346 "data_offset": 2048, 00:14:07.346 "data_size": 63488 00:14:07.346 }, 00:14:07.346 { 00:14:07.346 "name": "BaseBdev2", 00:14:07.346 "uuid": "a5e78b3b-2f12-5c93-9d10-a3caff89377d", 00:14:07.346 "is_configured": true, 00:14:07.346 "data_offset": 2048, 00:14:07.346 "data_size": 63488 00:14:07.346 }, 00:14:07.346 { 00:14:07.346 "name": "BaseBdev3", 00:14:07.346 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:07.346 "is_configured": true, 00:14:07.346 "data_offset": 2048, 00:14:07.346 "data_size": 63488 00:14:07.346 }, 
00:14:07.346 { 00:14:07.346 "name": "BaseBdev4", 00:14:07.346 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:07.346 "is_configured": true, 00:14:07.346 "data_offset": 2048, 00:14:07.346 "data_size": 63488 00:14:07.346 } 00:14:07.346 ] 00:14:07.346 }' 00:14:07.346 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.346 [2024-12-06 09:51:32.397535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:07.346 [2024-12-06 09:51:32.398030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:07.346 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.346 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.346 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.346 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:07.346 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.346 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.346 [2024-12-06 09:51:32.508839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:07.346 [2024-12-06 09:51:32.516290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:07.606 [2024-12-06 09:51:32.624788] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:07.606 [2024-12-06 09:51:32.636149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.606 [2024-12-06 09:51:32.636244] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:07.606 [2024-12-06 09:51:32.636264] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:07.606 [2024-12-06 09:51:32.671438] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:07.606 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.606 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:07.606 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.607 09:51:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.607 "name": "raid_bdev1", 00:14:07.607 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:07.607 "strip_size_kb": 0, 00:14:07.607 "state": "online", 00:14:07.607 "raid_level": "raid1", 00:14:07.607 "superblock": true, 00:14:07.607 "num_base_bdevs": 4, 00:14:07.607 "num_base_bdevs_discovered": 3, 00:14:07.607 "num_base_bdevs_operational": 3, 00:14:07.607 "base_bdevs_list": [ 00:14:07.607 { 00:14:07.607 "name": null, 00:14:07.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.607 "is_configured": false, 00:14:07.607 "data_offset": 0, 00:14:07.607 "data_size": 63488 00:14:07.607 }, 00:14:07.607 { 00:14:07.607 "name": "BaseBdev2", 00:14:07.607 "uuid": "a5e78b3b-2f12-5c93-9d10-a3caff89377d", 00:14:07.607 "is_configured": true, 00:14:07.607 "data_offset": 2048, 00:14:07.607 "data_size": 63488 00:14:07.607 }, 00:14:07.607 { 00:14:07.607 "name": "BaseBdev3", 00:14:07.607 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:07.607 "is_configured": true, 00:14:07.607 "data_offset": 2048, 00:14:07.607 "data_size": 63488 00:14:07.607 }, 00:14:07.607 { 00:14:07.607 "name": "BaseBdev4", 00:14:07.607 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:07.607 "is_configured": true, 00:14:07.607 "data_offset": 2048, 00:14:07.607 "data_size": 63488 00:14:07.607 } 00:14:07.607 ] 00:14:07.607 }' 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.607 09:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.126 163.50 IOPS, 490.50 MiB/s [2024-12-06T09:51:33.399Z] 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
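The rebuild `process` object shown earlier in this section reports `"blocks": 12288, "percent": 19` against the raid bdev size of 63488 blocks returned by `bdev_get_bdevs`. The logged percentage is consistent with an integer blocks-rebuilt-over-total ratio; the sketch below restates that arithmetic (how SPDK actually derives the field is an assumption here, not confirmed by this log):

```python
# Progress values copied from the rebuild "process" object in the log above.
blocks_rebuilt = 12288
raid_num_blocks = 63488  # raid_bdev_size reported by bdev_get_bdevs

# Hypothetical reconstruction of the "percent": 19 field as integer percent.
percent = blocks_rebuilt * 100 // raid_num_blocks
print(percent)  # 19
```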
00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.126 "name": "raid_bdev1", 00:14:08.126 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:08.126 "strip_size_kb": 0, 00:14:08.126 "state": "online", 00:14:08.126 "raid_level": "raid1", 00:14:08.126 "superblock": true, 00:14:08.126 "num_base_bdevs": 4, 00:14:08.126 "num_base_bdevs_discovered": 3, 00:14:08.126 "num_base_bdevs_operational": 3, 00:14:08.126 "base_bdevs_list": [ 00:14:08.126 { 00:14:08.126 "name": null, 00:14:08.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.126 "is_configured": false, 00:14:08.126 "data_offset": 0, 00:14:08.126 "data_size": 63488 00:14:08.126 }, 00:14:08.126 { 00:14:08.126 "name": "BaseBdev2", 00:14:08.126 "uuid": "a5e78b3b-2f12-5c93-9d10-a3caff89377d", 00:14:08.126 "is_configured": true, 00:14:08.126 "data_offset": 2048, 00:14:08.126 "data_size": 63488 00:14:08.126 }, 00:14:08.126 { 00:14:08.126 "name": "BaseBdev3", 
00:14:08.126 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:08.126 "is_configured": true, 00:14:08.126 "data_offset": 2048, 00:14:08.126 "data_size": 63488 00:14:08.126 }, 00:14:08.126 { 00:14:08.126 "name": "BaseBdev4", 00:14:08.126 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:08.126 "is_configured": true, 00:14:08.126 "data_offset": 2048, 00:14:08.126 "data_size": 63488 00:14:08.126 } 00:14:08.126 ] 00:14:08.126 }' 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.126 [2024-12-06 09:51:33.278084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.126 09:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:08.126 [2024-12-06 09:51:33.340267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:08.126 [2024-12-06 09:51:33.342271] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:08.386 [2024-12-06 09:51:33.450431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:08.386 
[2024-12-06 09:51:33.451029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:08.386 [2024-12-06 09:51:33.575517] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:08.386 [2024-12-06 09:51:33.575914] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:08.646 [2024-12-06 09:51:33.913279] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:08.905 167.00 IOPS, 501.00 MiB/s [2024-12-06T09:51:34.178Z] [2024-12-06 09:51:34.050733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:08.905 [2024-12-06 09:51:34.051609] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:09.165 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.165 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.165 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.165 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.165 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.165 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.165 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.165 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.165 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:09.165 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.165 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.165 "name": "raid_bdev1", 00:14:09.165 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:09.165 "strip_size_kb": 0, 00:14:09.165 "state": "online", 00:14:09.165 "raid_level": "raid1", 00:14:09.165 "superblock": true, 00:14:09.165 "num_base_bdevs": 4, 00:14:09.165 "num_base_bdevs_discovered": 4, 00:14:09.165 "num_base_bdevs_operational": 4, 00:14:09.165 "process": { 00:14:09.165 "type": "rebuild", 00:14:09.165 "target": "spare", 00:14:09.165 "progress": { 00:14:09.165 "blocks": 12288, 00:14:09.165 "percent": 19 00:14:09.165 } 00:14:09.165 }, 00:14:09.165 "base_bdevs_list": [ 00:14:09.165 { 00:14:09.165 "name": "spare", 00:14:09.165 "uuid": "e17cb8b7-605c-5449-8b92-dc5bd51c7284", 00:14:09.165 "is_configured": true, 00:14:09.165 "data_offset": 2048, 00:14:09.165 "data_size": 63488 00:14:09.165 }, 00:14:09.165 { 00:14:09.165 "name": "BaseBdev2", 00:14:09.165 "uuid": "a5e78b3b-2f12-5c93-9d10-a3caff89377d", 00:14:09.165 "is_configured": true, 00:14:09.166 "data_offset": 2048, 00:14:09.166 "data_size": 63488 00:14:09.166 }, 00:14:09.166 { 00:14:09.166 "name": "BaseBdev3", 00:14:09.166 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:09.166 "is_configured": true, 00:14:09.166 "data_offset": 2048, 00:14:09.166 "data_size": 63488 00:14:09.166 }, 00:14:09.166 { 00:14:09.166 "name": "BaseBdev4", 00:14:09.166 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:09.166 "is_configured": true, 00:14:09.166 "data_offset": 2048, 00:14:09.166 "data_size": 63488 00:14:09.166 } 00:14:09.166 ] 00:14:09.166 }' 00:14:09.166 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.166 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:14:09.166 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:09.426 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.426 [2024-12-06 09:51:34.477367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:09.426 [2024-12-06 09:51:34.677223] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:09.426 [2024-12-06 09:51:34.677321] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:09.426 [2024-12-06 09:51:34.679101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:09.426 [2024-12-06 09:51:34.686039] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:09.426 09:51:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.426 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.686 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.686 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.686 "name": "raid_bdev1", 00:14:09.686 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:09.686 "strip_size_kb": 0, 00:14:09.686 "state": "online", 00:14:09.686 "raid_level": "raid1", 00:14:09.686 "superblock": true, 00:14:09.686 "num_base_bdevs": 4, 00:14:09.686 "num_base_bdevs_discovered": 3, 00:14:09.686 "num_base_bdevs_operational": 3, 00:14:09.686 "process": { 00:14:09.686 "type": "rebuild", 00:14:09.686 "target": "spare", 
00:14:09.686 "progress": { 00:14:09.686 "blocks": 16384, 00:14:09.686 "percent": 25 00:14:09.686 } 00:14:09.686 }, 00:14:09.687 "base_bdevs_list": [ 00:14:09.687 { 00:14:09.687 "name": "spare", 00:14:09.687 "uuid": "e17cb8b7-605c-5449-8b92-dc5bd51c7284", 00:14:09.687 "is_configured": true, 00:14:09.687 "data_offset": 2048, 00:14:09.687 "data_size": 63488 00:14:09.687 }, 00:14:09.687 { 00:14:09.687 "name": null, 00:14:09.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.687 "is_configured": false, 00:14:09.687 "data_offset": 0, 00:14:09.687 "data_size": 63488 00:14:09.687 }, 00:14:09.687 { 00:14:09.687 "name": "BaseBdev3", 00:14:09.687 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:09.687 "is_configured": true, 00:14:09.687 "data_offset": 2048, 00:14:09.687 "data_size": 63488 00:14:09.687 }, 00:14:09.687 { 00:14:09.687 "name": "BaseBdev4", 00:14:09.687 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:09.687 "is_configured": true, 00:14:09.687 "data_offset": 2048, 00:14:09.687 "data_size": 63488 00:14:09.687 } 00:14:09.687 ] 00:14:09.687 }' 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=488 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.687 "name": "raid_bdev1", 00:14:09.687 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:09.687 "strip_size_kb": 0, 00:14:09.687 "state": "online", 00:14:09.687 "raid_level": "raid1", 00:14:09.687 "superblock": true, 00:14:09.687 "num_base_bdevs": 4, 00:14:09.687 "num_base_bdevs_discovered": 3, 00:14:09.687 "num_base_bdevs_operational": 3, 00:14:09.687 "process": { 00:14:09.687 "type": "rebuild", 00:14:09.687 "target": "spare", 00:14:09.687 "progress": { 00:14:09.687 "blocks": 16384, 00:14:09.687 "percent": 25 00:14:09.687 } 00:14:09.687 }, 00:14:09.687 "base_bdevs_list": [ 00:14:09.687 { 00:14:09.687 "name": "spare", 00:14:09.687 "uuid": "e17cb8b7-605c-5449-8b92-dc5bd51c7284", 00:14:09.687 "is_configured": true, 00:14:09.687 "data_offset": 2048, 00:14:09.687 "data_size": 63488 00:14:09.687 }, 00:14:09.687 { 00:14:09.687 "name": null, 00:14:09.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.687 "is_configured": false, 00:14:09.687 
"data_offset": 0, 00:14:09.687 "data_size": 63488 00:14:09.687 }, 00:14:09.687 { 00:14:09.687 "name": "BaseBdev3", 00:14:09.687 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:09.687 "is_configured": true, 00:14:09.687 "data_offset": 2048, 00:14:09.687 "data_size": 63488 00:14:09.687 }, 00:14:09.687 { 00:14:09.687 "name": "BaseBdev4", 00:14:09.687 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:09.687 "is_configured": true, 00:14:09.687 "data_offset": 2048, 00:14:09.687 "data_size": 63488 00:14:09.687 } 00:14:09.687 ] 00:14:09.687 }' 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.687 143.25 IOPS, 429.75 MiB/s [2024-12-06T09:51:34.960Z] 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.687 09:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.947 [2024-12-06 09:51:35.025279] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:09.947 [2024-12-06 09:51:35.025983] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:09.947 [2024-12-06 09:51:35.130475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:10.207 [2024-12-06 09:51:35.353776] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:10.776 [2024-12-06 09:51:35.807439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:10.776 
128.60 IOPS, 385.80 MiB/s [2024-12-06T09:51:36.049Z] 09:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.776 09:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.776 09:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.776 09:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.776 09:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.776 09:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.776 09:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.776 09:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.776 09:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.776 09:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.776 09:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.776 09:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.776 "name": "raid_bdev1", 00:14:10.776 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:10.776 "strip_size_kb": 0, 00:14:10.776 "state": "online", 00:14:10.776 "raid_level": "raid1", 00:14:10.776 "superblock": true, 00:14:10.776 "num_base_bdevs": 4, 00:14:10.776 "num_base_bdevs_discovered": 3, 00:14:10.776 "num_base_bdevs_operational": 3, 00:14:10.776 "process": { 00:14:10.776 "type": "rebuild", 00:14:10.776 "target": "spare", 00:14:10.776 "progress": { 00:14:10.776 "blocks": 36864, 00:14:10.776 "percent": 58 00:14:10.776 } 00:14:10.776 }, 00:14:10.776 "base_bdevs_list": [ 
00:14:10.776 { 00:14:10.776 "name": "spare", 00:14:10.776 "uuid": "e17cb8b7-605c-5449-8b92-dc5bd51c7284", 00:14:10.776 "is_configured": true, 00:14:10.776 "data_offset": 2048, 00:14:10.776 "data_size": 63488 00:14:10.776 }, 00:14:10.776 { 00:14:10.776 "name": null, 00:14:10.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.776 "is_configured": false, 00:14:10.776 "data_offset": 0, 00:14:10.776 "data_size": 63488 00:14:10.776 }, 00:14:10.776 { 00:14:10.776 "name": "BaseBdev3", 00:14:10.776 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:10.776 "is_configured": true, 00:14:10.776 "data_offset": 2048, 00:14:10.776 "data_size": 63488 00:14:10.776 }, 00:14:10.776 { 00:14:10.776 "name": "BaseBdev4", 00:14:10.776 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:10.776 "is_configured": true, 00:14:10.776 "data_offset": 2048, 00:14:10.776 "data_size": 63488 00:14:10.776 } 00:14:10.776 ] 00:14:10.776 }' 00:14:10.776 09:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.776 [2024-12-06 09:51:36.040683] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:10.776 [2024-12-06 09:51:36.041602] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:11.035 09:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.035 09:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.036 09:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.036 09:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:11.036 [2024-12-06 09:51:36.271730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:11.605 
[2024-12-06 09:51:36.715150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:11.865 112.50 IOPS, 337.50 MiB/s [2024-12-06T09:51:37.138Z] 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.865 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.865 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.865 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.865 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.865 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.865 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.865 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.865 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.865 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.124 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.124 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.124 "name": "raid_bdev1", 00:14:12.124 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:12.124 "strip_size_kb": 0, 00:14:12.124 "state": "online", 00:14:12.124 "raid_level": "raid1", 00:14:12.124 "superblock": true, 00:14:12.124 "num_base_bdevs": 4, 00:14:12.124 "num_base_bdevs_discovered": 3, 00:14:12.124 "num_base_bdevs_operational": 3, 00:14:12.124 "process": { 00:14:12.124 "type": "rebuild", 00:14:12.125 "target": 
"spare", 00:14:12.125 "progress": { 00:14:12.125 "blocks": 53248, 00:14:12.125 "percent": 83 00:14:12.125 } 00:14:12.125 }, 00:14:12.125 "base_bdevs_list": [ 00:14:12.125 { 00:14:12.125 "name": "spare", 00:14:12.125 "uuid": "e17cb8b7-605c-5449-8b92-dc5bd51c7284", 00:14:12.125 "is_configured": true, 00:14:12.125 "data_offset": 2048, 00:14:12.125 "data_size": 63488 00:14:12.125 }, 00:14:12.125 { 00:14:12.125 "name": null, 00:14:12.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.125 "is_configured": false, 00:14:12.125 "data_offset": 0, 00:14:12.125 "data_size": 63488 00:14:12.125 }, 00:14:12.125 { 00:14:12.125 "name": "BaseBdev3", 00:14:12.125 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:12.125 "is_configured": true, 00:14:12.125 "data_offset": 2048, 00:14:12.125 "data_size": 63488 00:14:12.125 }, 00:14:12.125 { 00:14:12.125 "name": "BaseBdev4", 00:14:12.125 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:12.125 "is_configured": true, 00:14:12.125 "data_offset": 2048, 00:14:12.125 "data_size": 63488 00:14:12.125 } 00:14:12.125 ] 00:14:12.125 }' 00:14:12.125 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.125 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.125 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.125 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.125 09:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:12.125 [2024-12-06 09:51:37.270393] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:12.384 [2024-12-06 09:51:37.602888] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:12.644 [2024-12-06 09:51:37.702744] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:12.644 [2024-12-06 09:51:37.705842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.164 100.86 IOPS, 302.57 MiB/s [2024-12-06T09:51:38.437Z] 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.164 "name": "raid_bdev1", 00:14:13.164 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:13.164 "strip_size_kb": 0, 00:14:13.164 "state": "online", 00:14:13.164 "raid_level": "raid1", 00:14:13.164 "superblock": true, 00:14:13.164 "num_base_bdevs": 4, 00:14:13.164 "num_base_bdevs_discovered": 3, 00:14:13.164 "num_base_bdevs_operational": 3, 00:14:13.164 
"base_bdevs_list": [ 00:14:13.164 { 00:14:13.164 "name": "spare", 00:14:13.164 "uuid": "e17cb8b7-605c-5449-8b92-dc5bd51c7284", 00:14:13.164 "is_configured": true, 00:14:13.164 "data_offset": 2048, 00:14:13.164 "data_size": 63488 00:14:13.164 }, 00:14:13.164 { 00:14:13.164 "name": null, 00:14:13.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.164 "is_configured": false, 00:14:13.164 "data_offset": 0, 00:14:13.164 "data_size": 63488 00:14:13.164 }, 00:14:13.164 { 00:14:13.164 "name": "BaseBdev3", 00:14:13.164 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:13.164 "is_configured": true, 00:14:13.164 "data_offset": 2048, 00:14:13.164 "data_size": 63488 00:14:13.164 }, 00:14:13.164 { 00:14:13.164 "name": "BaseBdev4", 00:14:13.164 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:13.164 "is_configured": true, 00:14:13.164 "data_offset": 2048, 00:14:13.164 "data_size": 63488 00:14:13.164 } 00:14:13.164 ] 00:14:13.164 }' 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.164 09:51:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.164 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.425 "name": "raid_bdev1", 00:14:13.425 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:13.425 "strip_size_kb": 0, 00:14:13.425 "state": "online", 00:14:13.425 "raid_level": "raid1", 00:14:13.425 "superblock": true, 00:14:13.425 "num_base_bdevs": 4, 00:14:13.425 "num_base_bdevs_discovered": 3, 00:14:13.425 "num_base_bdevs_operational": 3, 00:14:13.425 "base_bdevs_list": [ 00:14:13.425 { 00:14:13.425 "name": "spare", 00:14:13.425 "uuid": "e17cb8b7-605c-5449-8b92-dc5bd51c7284", 00:14:13.425 "is_configured": true, 00:14:13.425 "data_offset": 2048, 00:14:13.425 "data_size": 63488 00:14:13.425 }, 00:14:13.425 { 00:14:13.425 "name": null, 00:14:13.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.425 "is_configured": false, 00:14:13.425 "data_offset": 0, 00:14:13.425 "data_size": 63488 00:14:13.425 }, 00:14:13.425 { 00:14:13.425 "name": "BaseBdev3", 00:14:13.425 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:13.425 "is_configured": true, 00:14:13.425 "data_offset": 2048, 00:14:13.425 "data_size": 63488 00:14:13.425 }, 00:14:13.425 { 00:14:13.425 "name": "BaseBdev4", 00:14:13.425 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:13.425 "is_configured": true, 00:14:13.425 "data_offset": 2048, 
00:14:13.425 "data_size": 63488 00:14:13.425 } 00:14:13.425 ] 00:14:13.425 }' 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.425 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.425 "name": "raid_bdev1", 00:14:13.425 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:13.425 "strip_size_kb": 0, 00:14:13.425 "state": "online", 00:14:13.425 "raid_level": "raid1", 00:14:13.425 "superblock": true, 00:14:13.425 "num_base_bdevs": 4, 00:14:13.425 "num_base_bdevs_discovered": 3, 00:14:13.425 "num_base_bdevs_operational": 3, 00:14:13.425 "base_bdevs_list": [ 00:14:13.425 { 00:14:13.425 "name": "spare", 00:14:13.425 "uuid": "e17cb8b7-605c-5449-8b92-dc5bd51c7284", 00:14:13.425 "is_configured": true, 00:14:13.425 "data_offset": 2048, 00:14:13.425 "data_size": 63488 00:14:13.425 }, 00:14:13.425 { 00:14:13.425 "name": null, 00:14:13.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.425 "is_configured": false, 00:14:13.425 "data_offset": 0, 00:14:13.425 "data_size": 63488 00:14:13.425 }, 00:14:13.425 { 00:14:13.425 "name": "BaseBdev3", 00:14:13.425 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:13.425 "is_configured": true, 00:14:13.425 "data_offset": 2048, 00:14:13.425 "data_size": 63488 00:14:13.425 }, 00:14:13.425 { 00:14:13.425 "name": "BaseBdev4", 00:14:13.425 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:13.425 "is_configured": true, 00:14:13.425 "data_offset": 2048, 00:14:13.425 "data_size": 63488 00:14:13.425 } 00:14:13.425 ] 00:14:13.425 }' 00:14:13.426 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.426 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.959 91.75 IOPS, 275.25 MiB/s [2024-12-06T09:51:39.232Z] 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # 
rpc_cmd bdev_raid_delete raid_bdev1 00:14:13.959 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.959 09:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.959 [2024-12-06 09:51:38.992517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.959 [2024-12-06 09:51:38.992598] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.959 00:14:13.959 Latency(us) 00:14:13.959 [2024-12-06T09:51:39.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.960 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:13.960 raid_bdev1 : 8.19 90.38 271.15 0.00 0.00 15313.29 338.05 112183.90 00:14:13.960 [2024-12-06T09:51:39.233Z] =================================================================================================================== 00:14:13.960 [2024-12-06T09:51:39.233Z] Total : 90.38 271.15 0.00 0.00 15313.29 338.05 112183.90 00:14:13.960 [2024-12-06 09:51:39.130112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.960 [2024-12-06 09:51:39.130260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.960 [2024-12-06 09:51:39.130399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.960 [2024-12-06 09:51:39.130464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:13.960 { 00:14:13.960 "results": [ 00:14:13.960 { 00:14:13.960 "job": "raid_bdev1", 00:14:13.960 "core_mask": "0x1", 00:14:13.960 "workload": "randrw", 00:14:13.960 "percentage": 50, 00:14:13.960 "status": "finished", 00:14:13.960 "queue_depth": 2, 00:14:13.960 "io_size": 3145728, 00:14:13.960 "runtime": 8.187351, 00:14:13.960 "iops": 90.38332422782412, 00:14:13.960 
"mibps": 271.14997268347236, 00:14:13.960 "io_failed": 0, 00:14:13.960 "io_timeout": 0, 00:14:13.960 "avg_latency_us": 15313.289743892363, 00:14:13.960 "min_latency_us": 338.05414847161575, 00:14:13.960 "max_latency_us": 112183.89519650655 00:14:13.960 } 00:14:13.960 ], 00:14:13.960 "core_count": 1 00:14:13.960 } 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.960 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:14.244 /dev/nbd0 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.244 1+0 records in 00:14:14.244 1+0 records out 00:14:14.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480789 s, 8.5 MB/s 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.244 
09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:14.244 09:51:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:14.244 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.245 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:14.504 /dev/nbd1 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.504 1+0 records in 00:14:14.504 1+0 records out 00:14:14.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055198 s, 7.4 MB/s 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # size=4096 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.504 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:14.763 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:14.763 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.763 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:14.763 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:14.763 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:14.763 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.763 09:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.023 
09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.023 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:15.023 /dev/nbd1 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:15.282 1+0 records in 00:14:15.282 1+0 records out 00:14:15.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054388 s, 7.5 MB/s 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.282 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:15.541 09:51:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:15.541 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.801 09:51:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.801 [2024-12-06 09:51:40.873321] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:15.801 [2024-12-06 09:51:40.873436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.801 [2024-12-06 09:51:40.873476] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:15.801 [2024-12-06 09:51:40.873529] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.801 [2024-12-06 09:51:40.875701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.801 [2024-12-06 09:51:40.875772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:15.801 [2024-12-06 09:51:40.875920] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:15.801 [2024-12-06 09:51:40.876019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.801 [2024-12-06 09:51:40.876234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.801 [2024-12-06 09:51:40.876387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:15.801 spare 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.801 [2024-12-06 09:51:40.976353] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:15.801 [2024-12-06 09:51:40.976450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:15.801 [2024-12-06 09:51:40.976836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:15.801 [2024-12-06 09:51:40.977092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:15.801 [2024-12-06 09:51:40.977157] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:15.801 [2024-12-06 09:51:40.977427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.801 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.802 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.802 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.802 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.802 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.802 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.802 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:14:15.802 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.802 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.802 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.802 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.802 09:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.802 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.802 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.802 "name": "raid_bdev1", 00:14:15.802 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:15.802 "strip_size_kb": 0, 00:14:15.802 "state": "online", 00:14:15.802 "raid_level": "raid1", 00:14:15.802 "superblock": true, 00:14:15.802 "num_base_bdevs": 4, 00:14:15.802 "num_base_bdevs_discovered": 3, 00:14:15.802 "num_base_bdevs_operational": 3, 00:14:15.802 "base_bdevs_list": [ 00:14:15.802 { 00:14:15.802 "name": "spare", 00:14:15.802 "uuid": "e17cb8b7-605c-5449-8b92-dc5bd51c7284", 00:14:15.802 "is_configured": true, 00:14:15.802 "data_offset": 2048, 00:14:15.802 "data_size": 63488 00:14:15.802 }, 00:14:15.802 { 00:14:15.802 "name": null, 00:14:15.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.802 "is_configured": false, 00:14:15.802 "data_offset": 2048, 00:14:15.802 "data_size": 63488 00:14:15.802 }, 00:14:15.802 { 00:14:15.802 "name": "BaseBdev3", 00:14:15.802 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:15.802 "is_configured": true, 00:14:15.802 "data_offset": 2048, 00:14:15.802 "data_size": 63488 00:14:15.802 }, 00:14:15.802 { 00:14:15.802 "name": "BaseBdev4", 00:14:15.802 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:15.802 
"is_configured": true, 00:14:15.802 "data_offset": 2048, 00:14:15.802 "data_size": 63488 00:14:15.802 } 00:14:15.802 ] 00:14:15.802 }' 00:14:15.802 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.802 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.370 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.370 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.370 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.370 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.370 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.370 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.370 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.370 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.370 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.370 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.370 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.370 "name": "raid_bdev1", 00:14:16.370 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:16.370 "strip_size_kb": 0, 00:14:16.370 "state": "online", 00:14:16.370 "raid_level": "raid1", 00:14:16.370 "superblock": true, 00:14:16.370 "num_base_bdevs": 4, 00:14:16.370 "num_base_bdevs_discovered": 3, 00:14:16.370 "num_base_bdevs_operational": 3, 00:14:16.370 "base_bdevs_list": [ 00:14:16.370 { 00:14:16.370 "name": 
"spare", 00:14:16.370 "uuid": "e17cb8b7-605c-5449-8b92-dc5bd51c7284", 00:14:16.370 "is_configured": true, 00:14:16.370 "data_offset": 2048, 00:14:16.370 "data_size": 63488 00:14:16.370 }, 00:14:16.370 { 00:14:16.370 "name": null, 00:14:16.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.370 "is_configured": false, 00:14:16.370 "data_offset": 2048, 00:14:16.370 "data_size": 63488 00:14:16.370 }, 00:14:16.370 { 00:14:16.370 "name": "BaseBdev3", 00:14:16.370 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:16.370 "is_configured": true, 00:14:16.370 "data_offset": 2048, 00:14:16.370 "data_size": 63488 00:14:16.370 }, 00:14:16.370 { 00:14:16.370 "name": "BaseBdev4", 00:14:16.370 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:16.370 "is_configured": true, 00:14:16.370 "data_offset": 2048, 00:14:16.370 "data_size": 63488 00:14:16.371 } 00:14:16.371 ] 00:14:16.371 }' 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ 
spare == \s\p\a\r\e ]] 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.371 [2024-12-06 09:51:41.596400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.371 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.630 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.630 "name": "raid_bdev1", 00:14:16.630 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:16.630 "strip_size_kb": 0, 00:14:16.630 "state": "online", 00:14:16.630 "raid_level": "raid1", 00:14:16.630 "superblock": true, 00:14:16.630 "num_base_bdevs": 4, 00:14:16.630 "num_base_bdevs_discovered": 2, 00:14:16.630 "num_base_bdevs_operational": 2, 00:14:16.630 "base_bdevs_list": [ 00:14:16.630 { 00:14:16.630 "name": null, 00:14:16.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.630 "is_configured": false, 00:14:16.630 "data_offset": 0, 00:14:16.630 "data_size": 63488 00:14:16.630 }, 00:14:16.630 { 00:14:16.630 "name": null, 00:14:16.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.630 "is_configured": false, 00:14:16.630 "data_offset": 2048, 00:14:16.630 "data_size": 63488 00:14:16.630 }, 00:14:16.630 { 00:14:16.630 "name": "BaseBdev3", 00:14:16.630 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:16.630 "is_configured": true, 00:14:16.630 "data_offset": 2048, 00:14:16.630 "data_size": 63488 00:14:16.630 }, 00:14:16.630 { 00:14:16.630 "name": "BaseBdev4", 00:14:16.630 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:16.630 "is_configured": true, 00:14:16.630 "data_offset": 2048, 00:14:16.630 "data_size": 63488 00:14:16.630 } 00:14:16.630 ] 00:14:16.630 }' 00:14:16.630 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.630 09:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.889 09:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:14:16.889 09:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.889 09:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.889 [2024-12-06 09:51:42.055989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.889 [2024-12-06 09:51:42.056281] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:16.889 [2024-12-06 09:51:42.056349] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:16.889 [2024-12-06 09:51:42.056433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.889 [2024-12-06 09:51:42.070929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:16.889 09:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.889 09:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:16.889 [2024-12-06 09:51:42.072825] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:17.825 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.825 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.825 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.825 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.825 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.825 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.825 09:51:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.825 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.825 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.085 "name": "raid_bdev1", 00:14:18.085 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:18.085 "strip_size_kb": 0, 00:14:18.085 "state": "online", 00:14:18.085 "raid_level": "raid1", 00:14:18.085 "superblock": true, 00:14:18.085 "num_base_bdevs": 4, 00:14:18.085 "num_base_bdevs_discovered": 3, 00:14:18.085 "num_base_bdevs_operational": 3, 00:14:18.085 "process": { 00:14:18.085 "type": "rebuild", 00:14:18.085 "target": "spare", 00:14:18.085 "progress": { 00:14:18.085 "blocks": 20480, 00:14:18.085 "percent": 32 00:14:18.085 } 00:14:18.085 }, 00:14:18.085 "base_bdevs_list": [ 00:14:18.085 { 00:14:18.085 "name": "spare", 00:14:18.085 "uuid": "e17cb8b7-605c-5449-8b92-dc5bd51c7284", 00:14:18.085 "is_configured": true, 00:14:18.085 "data_offset": 2048, 00:14:18.085 "data_size": 63488 00:14:18.085 }, 00:14:18.085 { 00:14:18.085 "name": null, 00:14:18.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.085 "is_configured": false, 00:14:18.085 "data_offset": 2048, 00:14:18.085 "data_size": 63488 00:14:18.085 }, 00:14:18.085 { 00:14:18.085 "name": "BaseBdev3", 00:14:18.085 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:18.085 "is_configured": true, 00:14:18.085 "data_offset": 2048, 00:14:18.085 "data_size": 63488 00:14:18.085 }, 00:14:18.085 { 00:14:18.085 "name": "BaseBdev4", 00:14:18.085 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:18.085 "is_configured": true, 00:14:18.085 "data_offset": 2048, 00:14:18.085 
"data_size": 63488 00:14:18.085 } 00:14:18.085 ] 00:14:18.085 }' 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.085 [2024-12-06 09:51:43.212489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.085 [2024-12-06 09:51:43.277998] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:18.085 [2024-12-06 09:51:43.278057] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.085 [2024-12-06 09:51:43.278095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:18.085 [2024-12-06 09:51:43.278102] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.085 09:51:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.085 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.345 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.345 "name": "raid_bdev1", 00:14:18.345 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:18.345 "strip_size_kb": 0, 00:14:18.345 "state": "online", 00:14:18.345 "raid_level": "raid1", 00:14:18.345 "superblock": true, 00:14:18.345 "num_base_bdevs": 4, 00:14:18.345 "num_base_bdevs_discovered": 2, 00:14:18.345 "num_base_bdevs_operational": 2, 00:14:18.345 "base_bdevs_list": [ 00:14:18.345 { 00:14:18.345 "name": null, 00:14:18.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.345 "is_configured": false, 00:14:18.345 "data_offset": 0, 00:14:18.345 "data_size": 
63488 00:14:18.345 }, 00:14:18.345 { 00:14:18.345 "name": null, 00:14:18.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.345 "is_configured": false, 00:14:18.345 "data_offset": 2048, 00:14:18.345 "data_size": 63488 00:14:18.345 }, 00:14:18.345 { 00:14:18.345 "name": "BaseBdev3", 00:14:18.345 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:18.345 "is_configured": true, 00:14:18.345 "data_offset": 2048, 00:14:18.345 "data_size": 63488 00:14:18.345 }, 00:14:18.345 { 00:14:18.345 "name": "BaseBdev4", 00:14:18.345 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:18.345 "is_configured": true, 00:14:18.345 "data_offset": 2048, 00:14:18.345 "data_size": 63488 00:14:18.345 } 00:14:18.345 ] 00:14:18.345 }' 00:14:18.345 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.345 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.605 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:18.605 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.605 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.605 [2024-12-06 09:51:43.726222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:18.605 [2024-12-06 09:51:43.726341] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.605 [2024-12-06 09:51:43.726403] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:18.605 [2024-12-06 09:51:43.726438] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.605 [2024-12-06 09:51:43.726949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.605 [2024-12-06 09:51:43.727010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:14:18.605 [2024-12-06 09:51:43.727154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:18.605 [2024-12-06 09:51:43.727214] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:18.605 [2024-12-06 09:51:43.727275] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:18.605 [2024-12-06 09:51:43.727332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:18.605 [2024-12-06 09:51:43.741684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:18.605 spare 00:14:18.605 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.605 [2024-12-06 09:51:43.743528] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:18.605 09:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:19.543 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.543 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.543 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.543 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.543 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.543 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.543 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.543 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:19.543 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.543 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.543 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.543 "name": "raid_bdev1", 00:14:19.543 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:19.543 "strip_size_kb": 0, 00:14:19.543 "state": "online", 00:14:19.543 "raid_level": "raid1", 00:14:19.543 "superblock": true, 00:14:19.543 "num_base_bdevs": 4, 00:14:19.543 "num_base_bdevs_discovered": 3, 00:14:19.543 "num_base_bdevs_operational": 3, 00:14:19.543 "process": { 00:14:19.543 "type": "rebuild", 00:14:19.543 "target": "spare", 00:14:19.543 "progress": { 00:14:19.543 "blocks": 20480, 00:14:19.543 "percent": 32 00:14:19.543 } 00:14:19.543 }, 00:14:19.543 "base_bdevs_list": [ 00:14:19.543 { 00:14:19.543 "name": "spare", 00:14:19.543 "uuid": "e17cb8b7-605c-5449-8b92-dc5bd51c7284", 00:14:19.543 "is_configured": true, 00:14:19.543 "data_offset": 2048, 00:14:19.543 "data_size": 63488 00:14:19.543 }, 00:14:19.544 { 00:14:19.544 "name": null, 00:14:19.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.544 "is_configured": false, 00:14:19.544 "data_offset": 2048, 00:14:19.544 "data_size": 63488 00:14:19.544 }, 00:14:19.544 { 00:14:19.544 "name": "BaseBdev3", 00:14:19.544 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:19.544 "is_configured": true, 00:14:19.544 "data_offset": 2048, 00:14:19.544 "data_size": 63488 00:14:19.544 }, 00:14:19.544 { 00:14:19.544 "name": "BaseBdev4", 00:14:19.544 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:19.544 "is_configured": true, 00:14:19.544 "data_offset": 2048, 00:14:19.544 "data_size": 63488 00:14:19.544 } 00:14:19.544 ] 00:14:19.544 }' 00:14:19.544 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.804 09:51:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.804 [2024-12-06 09:51:44.911566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.804 [2024-12-06 09:51:44.948571] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:19.804 [2024-12-06 09:51:44.948729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.804 [2024-12-06 09:51:44.948786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:19.804 [2024-12-06 09:51:44.948821] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.804 09:51:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.804 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.804 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.804 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.804 "name": "raid_bdev1", 00:14:19.804 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:19.804 "strip_size_kb": 0, 00:14:19.804 "state": "online", 00:14:19.804 "raid_level": "raid1", 00:14:19.804 "superblock": true, 00:14:19.804 "num_base_bdevs": 4, 00:14:19.804 "num_base_bdevs_discovered": 2, 00:14:19.804 "num_base_bdevs_operational": 2, 00:14:19.804 "base_bdevs_list": [ 00:14:19.804 { 00:14:19.804 "name": null, 00:14:19.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.804 "is_configured": false, 00:14:19.804 "data_offset": 0, 00:14:19.804 "data_size": 63488 00:14:19.804 }, 00:14:19.804 { 00:14:19.804 "name": null, 00:14:19.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.804 "is_configured": false, 00:14:19.804 "data_offset": 2048, 00:14:19.804 
"data_size": 63488 00:14:19.804 }, 00:14:19.804 { 00:14:19.804 "name": "BaseBdev3", 00:14:19.804 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:19.804 "is_configured": true, 00:14:19.804 "data_offset": 2048, 00:14:19.804 "data_size": 63488 00:14:19.804 }, 00:14:19.804 { 00:14:19.804 "name": "BaseBdev4", 00:14:19.804 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:19.804 "is_configured": true, 00:14:19.804 "data_offset": 2048, 00:14:19.804 "data_size": 63488 00:14:19.804 } 00:14:19.804 ] 00:14:19.804 }' 00:14:19.804 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.804 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.373 "name": "raid_bdev1", 
00:14:20.373 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:20.373 "strip_size_kb": 0, 00:14:20.373 "state": "online", 00:14:20.373 "raid_level": "raid1", 00:14:20.373 "superblock": true, 00:14:20.373 "num_base_bdevs": 4, 00:14:20.373 "num_base_bdevs_discovered": 2, 00:14:20.373 "num_base_bdevs_operational": 2, 00:14:20.373 "base_bdevs_list": [ 00:14:20.373 { 00:14:20.373 "name": null, 00:14:20.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.373 "is_configured": false, 00:14:20.373 "data_offset": 0, 00:14:20.373 "data_size": 63488 00:14:20.373 }, 00:14:20.373 { 00:14:20.373 "name": null, 00:14:20.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.373 "is_configured": false, 00:14:20.373 "data_offset": 2048, 00:14:20.373 "data_size": 63488 00:14:20.373 }, 00:14:20.373 { 00:14:20.373 "name": "BaseBdev3", 00:14:20.373 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:20.373 "is_configured": true, 00:14:20.373 "data_offset": 2048, 00:14:20.373 "data_size": 63488 00:14:20.373 }, 00:14:20.373 { 00:14:20.373 "name": "BaseBdev4", 00:14:20.373 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:20.373 "is_configured": true, 00:14:20.373 "data_offset": 2048, 00:14:20.373 "data_size": 63488 00:14:20.373 } 00:14:20.373 ] 00:14:20.373 }' 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.373 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.374 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.374 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:20.374 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.374 09:51:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.374 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.374 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:20.374 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.374 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.374 [2024-12-06 09:51:45.528706] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:20.374 [2024-12-06 09:51:45.528766] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.374 [2024-12-06 09:51:45.528784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:20.374 [2024-12-06 09:51:45.528795] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.374 [2024-12-06 09:51:45.529251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.374 [2024-12-06 09:51:45.529289] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:20.374 [2024-12-06 09:51:45.529381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:20.374 [2024-12-06 09:51:45.529396] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:20.374 [2024-12-06 09:51:45.529405] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:20.374 [2024-12-06 09:51:45.529416] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:20.374 BaseBdev1 00:14:20.374 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:20.374 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.313 "name": "raid_bdev1", 00:14:21.313 "uuid": 
"49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:21.313 "strip_size_kb": 0, 00:14:21.313 "state": "online", 00:14:21.313 "raid_level": "raid1", 00:14:21.313 "superblock": true, 00:14:21.313 "num_base_bdevs": 4, 00:14:21.313 "num_base_bdevs_discovered": 2, 00:14:21.313 "num_base_bdevs_operational": 2, 00:14:21.313 "base_bdevs_list": [ 00:14:21.313 { 00:14:21.313 "name": null, 00:14:21.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.313 "is_configured": false, 00:14:21.313 "data_offset": 0, 00:14:21.313 "data_size": 63488 00:14:21.313 }, 00:14:21.313 { 00:14:21.313 "name": null, 00:14:21.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.313 "is_configured": false, 00:14:21.313 "data_offset": 2048, 00:14:21.313 "data_size": 63488 00:14:21.313 }, 00:14:21.313 { 00:14:21.313 "name": "BaseBdev3", 00:14:21.313 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:21.313 "is_configured": true, 00:14:21.313 "data_offset": 2048, 00:14:21.313 "data_size": 63488 00:14:21.313 }, 00:14:21.313 { 00:14:21.313 "name": "BaseBdev4", 00:14:21.313 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:21.313 "is_configured": true, 00:14:21.313 "data_offset": 2048, 00:14:21.313 "data_size": 63488 00:14:21.313 } 00:14:21.313 ] 00:14:21.313 }' 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.313 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.882 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.882 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.882 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.882 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.882 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.882 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.882 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.882 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.882 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.882 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.882 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.882 "name": "raid_bdev1", 00:14:21.882 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:21.882 "strip_size_kb": 0, 00:14:21.882 "state": "online", 00:14:21.882 "raid_level": "raid1", 00:14:21.882 "superblock": true, 00:14:21.882 "num_base_bdevs": 4, 00:14:21.882 "num_base_bdevs_discovered": 2, 00:14:21.882 "num_base_bdevs_operational": 2, 00:14:21.882 "base_bdevs_list": [ 00:14:21.882 { 00:14:21.882 "name": null, 00:14:21.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.882 "is_configured": false, 00:14:21.882 "data_offset": 0, 00:14:21.882 "data_size": 63488 00:14:21.882 }, 00:14:21.882 { 00:14:21.882 "name": null, 00:14:21.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.882 "is_configured": false, 00:14:21.882 "data_offset": 2048, 00:14:21.882 "data_size": 63488 00:14:21.882 }, 00:14:21.882 { 00:14:21.882 "name": "BaseBdev3", 00:14:21.882 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:21.882 "is_configured": true, 00:14:21.882 "data_offset": 2048, 00:14:21.882 "data_size": 63488 00:14:21.882 }, 00:14:21.882 { 00:14:21.882 "name": "BaseBdev4", 00:14:21.882 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:21.882 "is_configured": true, 00:14:21.882 "data_offset": 2048, 00:14:21.882 "data_size": 63488 00:14:21.882 
} 00:14:21.882 ] 00:14:21.882 }' 00:14:21.882 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.883 [2024-12-06 09:51:47.122879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.883 [2024-12-06 09:51:47.123050] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:14:21.883 [2024-12-06 09:51:47.123061] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:21.883 request: 00:14:21.883 { 00:14:21.883 "base_bdev": "BaseBdev1", 00:14:21.883 "raid_bdev": "raid_bdev1", 00:14:21.883 "method": "bdev_raid_add_base_bdev", 00:14:21.883 "req_id": 1 00:14:21.883 } 00:14:21.883 Got JSON-RPC error response 00:14:21.883 response: 00:14:21.883 { 00:14:21.883 "code": -22, 00:14:21.883 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:21.883 } 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:21.883 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.263 "name": "raid_bdev1", 00:14:23.263 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:23.263 "strip_size_kb": 0, 00:14:23.263 "state": "online", 00:14:23.263 "raid_level": "raid1", 00:14:23.263 "superblock": true, 00:14:23.263 "num_base_bdevs": 4, 00:14:23.263 "num_base_bdevs_discovered": 2, 00:14:23.263 "num_base_bdevs_operational": 2, 00:14:23.263 "base_bdevs_list": [ 00:14:23.263 { 00:14:23.263 "name": null, 00:14:23.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.263 "is_configured": false, 00:14:23.263 "data_offset": 0, 00:14:23.263 "data_size": 63488 00:14:23.263 }, 00:14:23.263 { 00:14:23.263 "name": null, 00:14:23.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.263 "is_configured": false, 00:14:23.263 "data_offset": 2048, 00:14:23.263 "data_size": 63488 00:14:23.263 }, 00:14:23.263 { 00:14:23.263 "name": "BaseBdev3", 00:14:23.263 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:23.263 "is_configured": true, 00:14:23.263 
"data_offset": 2048, 00:14:23.263 "data_size": 63488 00:14:23.263 }, 00:14:23.263 { 00:14:23.263 "name": "BaseBdev4", 00:14:23.263 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:23.263 "is_configured": true, 00:14:23.263 "data_offset": 2048, 00:14:23.263 "data_size": 63488 00:14:23.263 } 00:14:23.263 ] 00:14:23.263 }' 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.263 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.523 "name": "raid_bdev1", 00:14:23.523 "uuid": "49d35cd1-787d-4ce1-88dc-a7d84819eeb1", 00:14:23.523 "strip_size_kb": 0, 00:14:23.523 "state": "online", 00:14:23.523 "raid_level": "raid1", 00:14:23.523 "superblock": true, 
00:14:23.523 "num_base_bdevs": 4, 00:14:23.523 "num_base_bdevs_discovered": 2, 00:14:23.523 "num_base_bdevs_operational": 2, 00:14:23.523 "base_bdevs_list": [ 00:14:23.523 { 00:14:23.523 "name": null, 00:14:23.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.523 "is_configured": false, 00:14:23.523 "data_offset": 0, 00:14:23.523 "data_size": 63488 00:14:23.523 }, 00:14:23.523 { 00:14:23.523 "name": null, 00:14:23.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.523 "is_configured": false, 00:14:23.523 "data_offset": 2048, 00:14:23.523 "data_size": 63488 00:14:23.523 }, 00:14:23.523 { 00:14:23.523 "name": "BaseBdev3", 00:14:23.523 "uuid": "f7b921e5-6047-5bd8-be46-a4aa57c339e5", 00:14:23.523 "is_configured": true, 00:14:23.523 "data_offset": 2048, 00:14:23.523 "data_size": 63488 00:14:23.523 }, 00:14:23.523 { 00:14:23.523 "name": "BaseBdev4", 00:14:23.523 "uuid": "9d7911be-dd64-5986-a543-d6e4e99fd056", 00:14:23.523 "is_configured": true, 00:14:23.523 "data_offset": 2048, 00:14:23.523 "data_size": 63488 00:14:23.523 } 00:14:23.523 ] 00:14:23.523 }' 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.523 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.524 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79044 00:14:23.524 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79044 ']' 00:14:23.524 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79044 00:14:23.524 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:23.524 09:51:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.524 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79044 00:14:23.524 killing process with pid 79044 00:14:23.524 Received shutdown signal, test time was about 17.842477 seconds 00:14:23.524 00:14:23.524 Latency(us) 00:14:23.524 [2024-12-06T09:51:48.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.524 [2024-12-06T09:51:48.797Z] =================================================================================================================== 00:14:23.524 [2024-12-06T09:51:48.797Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:23.524 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.524 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.524 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79044' 00:14:23.524 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79044 00:14:23.524 [2024-12-06 09:51:48.743814] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.524 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79044 00:14:23.524 [2024-12-06 09:51:48.743949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.524 [2024-12-06 09:51:48.744042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.524 [2024-12-06 09:51:48.744055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:24.095 [2024-12-06 09:51:49.156376] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.477 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:14:25.477 00:14:25.477 real 0m21.298s 00:14:25.477 user 0m27.786s 00:14:25.477 sys 0m2.542s 00:14:25.477 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.477 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.477 ************************************ 00:14:25.477 END TEST raid_rebuild_test_sb_io 00:14:25.477 ************************************ 00:14:25.477 09:51:50 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:25.477 09:51:50 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:25.477 09:51:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:25.477 09:51:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.477 09:51:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.477 ************************************ 00:14:25.477 START TEST raid5f_state_function_test 00:14:25.477 ************************************ 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:25.477 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79766 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79766' 00:14:25.478 Process raid pid: 79766 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79766 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79766 ']' 00:14:25.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.478 09:51:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.478 [2024-12-06 09:51:50.499026] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:14:25.478 [2024-12-06 09:51:50.499605] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.478 [2024-12-06 09:51:50.675094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.738 [2024-12-06 09:51:50.792105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.738 [2024-12-06 09:51:50.992923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.738 [2024-12-06 09:51:50.992966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.307 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:26.307 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:26.307 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:26.307 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.307 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.307 [2024-12-06 09:51:51.328334] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.307 [2024-12-06 09:51:51.328466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.307 [2024-12-06 09:51:51.328481] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.307 [2024-12-06 09:51:51.328491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.307 [2024-12-06 09:51:51.328498] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:26.307 [2024-12-06 09:51:51.328507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:26.307 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.307 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:26.307 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.307 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.308 "name": "Existed_Raid", 00:14:26.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.308 "strip_size_kb": 64, 00:14:26.308 "state": "configuring", 00:14:26.308 "raid_level": "raid5f", 00:14:26.308 "superblock": false, 00:14:26.308 "num_base_bdevs": 3, 00:14:26.308 "num_base_bdevs_discovered": 0, 00:14:26.308 "num_base_bdevs_operational": 3, 00:14:26.308 "base_bdevs_list": [ 00:14:26.308 { 00:14:26.308 "name": "BaseBdev1", 00:14:26.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.308 "is_configured": false, 00:14:26.308 "data_offset": 0, 00:14:26.308 "data_size": 0 00:14:26.308 }, 00:14:26.308 { 00:14:26.308 "name": "BaseBdev2", 00:14:26.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.308 "is_configured": false, 00:14:26.308 "data_offset": 0, 00:14:26.308 "data_size": 0 00:14:26.308 }, 00:14:26.308 { 00:14:26.308 "name": "BaseBdev3", 00:14:26.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.308 "is_configured": false, 00:14:26.308 "data_offset": 0, 00:14:26.308 "data_size": 0 00:14:26.308 } 00:14:26.308 ] 00:14:26.308 }' 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.308 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.567 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:26.567 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.567 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.567 [2024-12-06 09:51:51.791494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.567 [2024-12-06 09:51:51.791592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:26.567 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.567 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:26.567 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.567 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.567 [2024-12-06 09:51:51.803463] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.567 [2024-12-06 09:51:51.803550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.567 [2024-12-06 09:51:51.803578] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.567 [2024-12-06 09:51:51.803600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.567 [2024-12-06 09:51:51.803618] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:26.567 [2024-12-06 09:51:51.803638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:26.567 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.567 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:26.567 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.567 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.826 [2024-12-06 09:51:51.851183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.826 BaseBdev1 00:14:26.826 09:51:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.826 [ 00:14:26.826 { 00:14:26.826 "name": "BaseBdev1", 00:14:26.826 "aliases": [ 00:14:26.826 "f33ca43d-c534-4e63-8e46-13f63e888d5d" 00:14:26.826 ], 00:14:26.826 "product_name": "Malloc disk", 00:14:26.826 "block_size": 512, 00:14:26.826 "num_blocks": 65536, 00:14:26.826 "uuid": "f33ca43d-c534-4e63-8e46-13f63e888d5d", 00:14:26.826 "assigned_rate_limits": { 00:14:26.826 "rw_ios_per_sec": 0, 00:14:26.826 
"rw_mbytes_per_sec": 0, 00:14:26.826 "r_mbytes_per_sec": 0, 00:14:26.826 "w_mbytes_per_sec": 0 00:14:26.826 }, 00:14:26.826 "claimed": true, 00:14:26.826 "claim_type": "exclusive_write", 00:14:26.826 "zoned": false, 00:14:26.826 "supported_io_types": { 00:14:26.826 "read": true, 00:14:26.826 "write": true, 00:14:26.826 "unmap": true, 00:14:26.826 "flush": true, 00:14:26.826 "reset": true, 00:14:26.826 "nvme_admin": false, 00:14:26.826 "nvme_io": false, 00:14:26.826 "nvme_io_md": false, 00:14:26.826 "write_zeroes": true, 00:14:26.826 "zcopy": true, 00:14:26.826 "get_zone_info": false, 00:14:26.826 "zone_management": false, 00:14:26.826 "zone_append": false, 00:14:26.826 "compare": false, 00:14:26.826 "compare_and_write": false, 00:14:26.826 "abort": true, 00:14:26.826 "seek_hole": false, 00:14:26.826 "seek_data": false, 00:14:26.826 "copy": true, 00:14:26.826 "nvme_iov_md": false 00:14:26.826 }, 00:14:26.826 "memory_domains": [ 00:14:26.826 { 00:14:26.826 "dma_device_id": "system", 00:14:26.826 "dma_device_type": 1 00:14:26.826 }, 00:14:26.826 { 00:14:26.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.826 "dma_device_type": 2 00:14:26.826 } 00:14:26.826 ], 00:14:26.826 "driver_specific": {} 00:14:26.826 } 00:14:26.826 ] 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.826 09:51:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.826 "name": "Existed_Raid", 00:14:26.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.826 "strip_size_kb": 64, 00:14:26.826 "state": "configuring", 00:14:26.826 "raid_level": "raid5f", 00:14:26.826 "superblock": false, 00:14:26.826 "num_base_bdevs": 3, 00:14:26.826 "num_base_bdevs_discovered": 1, 00:14:26.826 "num_base_bdevs_operational": 3, 00:14:26.826 "base_bdevs_list": [ 00:14:26.826 { 00:14:26.826 "name": "BaseBdev1", 00:14:26.826 "uuid": "f33ca43d-c534-4e63-8e46-13f63e888d5d", 00:14:26.826 "is_configured": true, 00:14:26.826 "data_offset": 0, 00:14:26.826 "data_size": 65536 00:14:26.826 }, 00:14:26.826 { 00:14:26.826 "name": 
"BaseBdev2", 00:14:26.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.826 "is_configured": false, 00:14:26.826 "data_offset": 0, 00:14:26.826 "data_size": 0 00:14:26.826 }, 00:14:26.826 { 00:14:26.826 "name": "BaseBdev3", 00:14:26.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.826 "is_configured": false, 00:14:26.826 "data_offset": 0, 00:14:26.826 "data_size": 0 00:14:26.826 } 00:14:26.826 ] 00:14:26.826 }' 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.826 09:51:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.086 [2024-12-06 09:51:52.318431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:27.086 [2024-12-06 09:51:52.318559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.086 [2024-12-06 09:51:52.330438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.086 [2024-12-06 09:51:52.332182] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:27.086 [2024-12-06 09:51:52.332279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:27.086 [2024-12-06 09:51:52.332297] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:27.086 [2024-12-06 09:51:52.332312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.086 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.345 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.345 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.345 "name": "Existed_Raid", 00:14:27.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.345 "strip_size_kb": 64, 00:14:27.345 "state": "configuring", 00:14:27.345 "raid_level": "raid5f", 00:14:27.345 "superblock": false, 00:14:27.345 "num_base_bdevs": 3, 00:14:27.345 "num_base_bdevs_discovered": 1, 00:14:27.345 "num_base_bdevs_operational": 3, 00:14:27.345 "base_bdevs_list": [ 00:14:27.345 { 00:14:27.345 "name": "BaseBdev1", 00:14:27.345 "uuid": "f33ca43d-c534-4e63-8e46-13f63e888d5d", 00:14:27.345 "is_configured": true, 00:14:27.345 "data_offset": 0, 00:14:27.345 "data_size": 65536 00:14:27.345 }, 00:14:27.345 { 00:14:27.345 "name": "BaseBdev2", 00:14:27.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.345 "is_configured": false, 00:14:27.345 "data_offset": 0, 00:14:27.345 "data_size": 0 00:14:27.345 }, 00:14:27.345 { 00:14:27.345 "name": "BaseBdev3", 00:14:27.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.345 "is_configured": false, 00:14:27.345 "data_offset": 0, 00:14:27.345 "data_size": 0 00:14:27.345 } 00:14:27.345 ] 00:14:27.345 }' 00:14:27.345 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.345 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.606 [2024-12-06 09:51:52.849238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.606 BaseBdev2 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.606 09:51:52 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.866 [ 00:14:27.866 { 00:14:27.866 "name": "BaseBdev2", 00:14:27.866 "aliases": [ 00:14:27.866 "bddfc6c7-11ff-4b45-85cd-4c8ee8af8f43" 00:14:27.866 ], 00:14:27.866 "product_name": "Malloc disk", 00:14:27.866 "block_size": 512, 00:14:27.866 "num_blocks": 65536, 00:14:27.866 "uuid": "bddfc6c7-11ff-4b45-85cd-4c8ee8af8f43", 00:14:27.866 "assigned_rate_limits": { 00:14:27.866 "rw_ios_per_sec": 0, 00:14:27.866 "rw_mbytes_per_sec": 0, 00:14:27.866 "r_mbytes_per_sec": 0, 00:14:27.866 "w_mbytes_per_sec": 0 00:14:27.866 }, 00:14:27.866 "claimed": true, 00:14:27.866 "claim_type": "exclusive_write", 00:14:27.866 "zoned": false, 00:14:27.866 "supported_io_types": { 00:14:27.866 "read": true, 00:14:27.866 "write": true, 00:14:27.866 "unmap": true, 00:14:27.866 "flush": true, 00:14:27.866 "reset": true, 00:14:27.866 "nvme_admin": false, 00:14:27.866 "nvme_io": false, 00:14:27.866 "nvme_io_md": false, 00:14:27.866 "write_zeroes": true, 00:14:27.866 "zcopy": true, 00:14:27.866 "get_zone_info": false, 00:14:27.866 "zone_management": false, 00:14:27.866 "zone_append": false, 00:14:27.866 "compare": false, 00:14:27.867 "compare_and_write": false, 00:14:27.867 "abort": true, 00:14:27.867 "seek_hole": false, 00:14:27.867 "seek_data": false, 00:14:27.867 "copy": true, 00:14:27.867 "nvme_iov_md": false 00:14:27.867 }, 00:14:27.867 "memory_domains": [ 00:14:27.867 { 00:14:27.867 "dma_device_id": "system", 00:14:27.867 "dma_device_type": 1 00:14:27.867 }, 00:14:27.867 { 00:14:27.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.867 "dma_device_type": 2 00:14:27.867 } 00:14:27.867 ], 00:14:27.867 "driver_specific": {} 00:14:27.867 } 00:14:27.867 ] 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:27.867 "name": "Existed_Raid", 00:14:27.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.867 "strip_size_kb": 64, 00:14:27.867 "state": "configuring", 00:14:27.867 "raid_level": "raid5f", 00:14:27.867 "superblock": false, 00:14:27.867 "num_base_bdevs": 3, 00:14:27.867 "num_base_bdevs_discovered": 2, 00:14:27.867 "num_base_bdevs_operational": 3, 00:14:27.867 "base_bdevs_list": [ 00:14:27.867 { 00:14:27.867 "name": "BaseBdev1", 00:14:27.867 "uuid": "f33ca43d-c534-4e63-8e46-13f63e888d5d", 00:14:27.867 "is_configured": true, 00:14:27.867 "data_offset": 0, 00:14:27.867 "data_size": 65536 00:14:27.867 }, 00:14:27.867 { 00:14:27.867 "name": "BaseBdev2", 00:14:27.867 "uuid": "bddfc6c7-11ff-4b45-85cd-4c8ee8af8f43", 00:14:27.867 "is_configured": true, 00:14:27.867 "data_offset": 0, 00:14:27.867 "data_size": 65536 00:14:27.867 }, 00:14:27.867 { 00:14:27.867 "name": "BaseBdev3", 00:14:27.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.867 "is_configured": false, 00:14:27.867 "data_offset": 0, 00:14:27.867 "data_size": 0 00:14:27.867 } 00:14:27.867 ] 00:14:27.867 }' 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.867 09:51:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.128 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:28.128 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.128 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.128 [2024-12-06 09:51:53.380972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:28.128 [2024-12-06 09:51:53.381028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:28.128 [2024-12-06 09:51:53.381043] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:28.128 [2024-12-06 09:51:53.381371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:28.128 [2024-12-06 09:51:53.386650] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:28.128 [2024-12-06 09:51:53.386670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:28.128 [2024-12-06 09:51:53.386937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.128 BaseBdev3 00:14:28.128 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.128 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:28.128 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:28.128 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:28.128 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:28.128 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:28.128 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:28.128 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:28.128 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.128 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.388 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.388 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:28.388 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.388 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.388 [ 00:14:28.388 { 00:14:28.388 "name": "BaseBdev3", 00:14:28.388 "aliases": [ 00:14:28.388 "da17f097-b8bc-4185-a06d-9e40a838c698" 00:14:28.388 ], 00:14:28.388 "product_name": "Malloc disk", 00:14:28.388 "block_size": 512, 00:14:28.388 "num_blocks": 65536, 00:14:28.388 "uuid": "da17f097-b8bc-4185-a06d-9e40a838c698", 00:14:28.388 "assigned_rate_limits": { 00:14:28.388 "rw_ios_per_sec": 0, 00:14:28.388 "rw_mbytes_per_sec": 0, 00:14:28.388 "r_mbytes_per_sec": 0, 00:14:28.388 "w_mbytes_per_sec": 0 00:14:28.388 }, 00:14:28.388 "claimed": true, 00:14:28.388 "claim_type": "exclusive_write", 00:14:28.388 "zoned": false, 00:14:28.388 "supported_io_types": { 00:14:28.388 "read": true, 00:14:28.388 "write": true, 00:14:28.388 "unmap": true, 00:14:28.388 "flush": true, 00:14:28.388 "reset": true, 00:14:28.388 "nvme_admin": false, 00:14:28.388 "nvme_io": false, 00:14:28.388 "nvme_io_md": false, 00:14:28.388 "write_zeroes": true, 00:14:28.388 "zcopy": true, 00:14:28.388 "get_zone_info": false, 00:14:28.388 "zone_management": false, 00:14:28.388 "zone_append": false, 00:14:28.388 "compare": false, 00:14:28.388 "compare_and_write": false, 00:14:28.388 "abort": true, 00:14:28.388 "seek_hole": false, 00:14:28.388 "seek_data": false, 00:14:28.388 "copy": true, 00:14:28.388 "nvme_iov_md": false 00:14:28.388 }, 00:14:28.388 "memory_domains": [ 00:14:28.388 { 00:14:28.388 "dma_device_id": "system", 00:14:28.388 "dma_device_type": 1 00:14:28.388 }, 00:14:28.388 { 00:14:28.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.388 "dma_device_type": 2 00:14:28.388 } 00:14:28.388 ], 00:14:28.388 "driver_specific": {} 00:14:28.388 } 00:14:28.388 ] 00:14:28.388 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:28.388 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.389 09:51:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.389 "name": "Existed_Raid", 00:14:28.389 "uuid": "a0ac37b2-b027-4355-a0a3-f7efd9b04e9c", 00:14:28.389 "strip_size_kb": 64, 00:14:28.389 "state": "online", 00:14:28.389 "raid_level": "raid5f", 00:14:28.389 "superblock": false, 00:14:28.389 "num_base_bdevs": 3, 00:14:28.389 "num_base_bdevs_discovered": 3, 00:14:28.389 "num_base_bdevs_operational": 3, 00:14:28.389 "base_bdevs_list": [ 00:14:28.389 { 00:14:28.389 "name": "BaseBdev1", 00:14:28.389 "uuid": "f33ca43d-c534-4e63-8e46-13f63e888d5d", 00:14:28.389 "is_configured": true, 00:14:28.389 "data_offset": 0, 00:14:28.389 "data_size": 65536 00:14:28.389 }, 00:14:28.389 { 00:14:28.389 "name": "BaseBdev2", 00:14:28.389 "uuid": "bddfc6c7-11ff-4b45-85cd-4c8ee8af8f43", 00:14:28.389 "is_configured": true, 00:14:28.389 "data_offset": 0, 00:14:28.389 "data_size": 65536 00:14:28.389 }, 00:14:28.389 { 00:14:28.389 "name": "BaseBdev3", 00:14:28.389 "uuid": "da17f097-b8bc-4185-a06d-9e40a838c698", 00:14:28.389 "is_configured": true, 00:14:28.389 "data_offset": 0, 00:14:28.389 "data_size": 65536 00:14:28.389 } 00:14:28.389 ] 00:14:28.389 }' 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.389 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.649 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:28.649 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:28.649 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:28.649 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:28.649 09:51:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:28.649 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:28.649 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:28.649 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:28.649 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.649 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.649 [2024-12-06 09:51:53.892849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.649 09:51:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.909 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:28.909 "name": "Existed_Raid", 00:14:28.909 "aliases": [ 00:14:28.909 "a0ac37b2-b027-4355-a0a3-f7efd9b04e9c" 00:14:28.909 ], 00:14:28.909 "product_name": "Raid Volume", 00:14:28.909 "block_size": 512, 00:14:28.909 "num_blocks": 131072, 00:14:28.909 "uuid": "a0ac37b2-b027-4355-a0a3-f7efd9b04e9c", 00:14:28.909 "assigned_rate_limits": { 00:14:28.909 "rw_ios_per_sec": 0, 00:14:28.909 "rw_mbytes_per_sec": 0, 00:14:28.909 "r_mbytes_per_sec": 0, 00:14:28.909 "w_mbytes_per_sec": 0 00:14:28.909 }, 00:14:28.909 "claimed": false, 00:14:28.909 "zoned": false, 00:14:28.909 "supported_io_types": { 00:14:28.909 "read": true, 00:14:28.909 "write": true, 00:14:28.909 "unmap": false, 00:14:28.909 "flush": false, 00:14:28.909 "reset": true, 00:14:28.909 "nvme_admin": false, 00:14:28.909 "nvme_io": false, 00:14:28.909 "nvme_io_md": false, 00:14:28.909 "write_zeroes": true, 00:14:28.909 "zcopy": false, 00:14:28.909 "get_zone_info": false, 00:14:28.909 "zone_management": false, 00:14:28.909 "zone_append": false, 
00:14:28.909 "compare": false, 00:14:28.909 "compare_and_write": false, 00:14:28.909 "abort": false, 00:14:28.909 "seek_hole": false, 00:14:28.909 "seek_data": false, 00:14:28.909 "copy": false, 00:14:28.909 "nvme_iov_md": false 00:14:28.909 }, 00:14:28.909 "driver_specific": { 00:14:28.909 "raid": { 00:14:28.909 "uuid": "a0ac37b2-b027-4355-a0a3-f7efd9b04e9c", 00:14:28.909 "strip_size_kb": 64, 00:14:28.909 "state": "online", 00:14:28.909 "raid_level": "raid5f", 00:14:28.909 "superblock": false, 00:14:28.909 "num_base_bdevs": 3, 00:14:28.909 "num_base_bdevs_discovered": 3, 00:14:28.909 "num_base_bdevs_operational": 3, 00:14:28.909 "base_bdevs_list": [ 00:14:28.909 { 00:14:28.909 "name": "BaseBdev1", 00:14:28.909 "uuid": "f33ca43d-c534-4e63-8e46-13f63e888d5d", 00:14:28.909 "is_configured": true, 00:14:28.909 "data_offset": 0, 00:14:28.909 "data_size": 65536 00:14:28.909 }, 00:14:28.909 { 00:14:28.909 "name": "BaseBdev2", 00:14:28.909 "uuid": "bddfc6c7-11ff-4b45-85cd-4c8ee8af8f43", 00:14:28.909 "is_configured": true, 00:14:28.909 "data_offset": 0, 00:14:28.909 "data_size": 65536 00:14:28.909 }, 00:14:28.909 { 00:14:28.909 "name": "BaseBdev3", 00:14:28.909 "uuid": "da17f097-b8bc-4185-a06d-9e40a838c698", 00:14:28.909 "is_configured": true, 00:14:28.909 "data_offset": 0, 00:14:28.909 "data_size": 65536 00:14:28.909 } 00:14:28.909 ] 00:14:28.909 } 00:14:28.909 } 00:14:28.909 }' 00:14:28.909 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:28.909 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:28.909 BaseBdev2 00:14:28.909 BaseBdev3' 00:14:28.909 09:51:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.909 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.909 [2024-12-06 09:51:54.168214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:29.170 
09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.170 "name": "Existed_Raid", 00:14:29.170 "uuid": "a0ac37b2-b027-4355-a0a3-f7efd9b04e9c", 00:14:29.170 "strip_size_kb": 64, 00:14:29.170 "state": 
"online", 00:14:29.170 "raid_level": "raid5f", 00:14:29.170 "superblock": false, 00:14:29.170 "num_base_bdevs": 3, 00:14:29.170 "num_base_bdevs_discovered": 2, 00:14:29.170 "num_base_bdevs_operational": 2, 00:14:29.170 "base_bdevs_list": [ 00:14:29.170 { 00:14:29.170 "name": null, 00:14:29.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.170 "is_configured": false, 00:14:29.170 "data_offset": 0, 00:14:29.170 "data_size": 65536 00:14:29.170 }, 00:14:29.170 { 00:14:29.170 "name": "BaseBdev2", 00:14:29.170 "uuid": "bddfc6c7-11ff-4b45-85cd-4c8ee8af8f43", 00:14:29.170 "is_configured": true, 00:14:29.170 "data_offset": 0, 00:14:29.170 "data_size": 65536 00:14:29.170 }, 00:14:29.170 { 00:14:29.170 "name": "BaseBdev3", 00:14:29.170 "uuid": "da17f097-b8bc-4185-a06d-9e40a838c698", 00:14:29.170 "is_configured": true, 00:14:29.170 "data_offset": 0, 00:14:29.170 "data_size": 65536 00:14:29.170 } 00:14:29.170 ] 00:14:29.170 }' 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.170 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.430 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:29.430 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:29.430 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:29.430 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.430 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.430 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.430 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.430 09:51:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:29.430 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:29.430 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:29.430 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.430 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.430 [2024-12-06 09:51:54.687938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:29.430 [2024-12-06 09:51:54.688038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.691 [2024-12-06 09:51:54.785938] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.691 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.691 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:29.691 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.692 [2024-12-06 09:51:54.845870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:29.692 [2024-12-06 09:51:54.845921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.692 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.953 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:29.953 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:29.953 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:29.953 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:29.953 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:29.953 09:51:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:29.953 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.953 09:51:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.953 BaseBdev2 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.953 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:29.953 [ 00:14:29.953 { 00:14:29.953 "name": "BaseBdev2", 00:14:29.953 "aliases": [ 00:14:29.954 "33fb7876-4060-4a3c-b01a-3f084d0d70a7" 00:14:29.954 ], 00:14:29.954 "product_name": "Malloc disk", 00:14:29.954 "block_size": 512, 00:14:29.954 "num_blocks": 65536, 00:14:29.954 "uuid": "33fb7876-4060-4a3c-b01a-3f084d0d70a7", 00:14:29.954 "assigned_rate_limits": { 00:14:29.954 "rw_ios_per_sec": 0, 00:14:29.954 "rw_mbytes_per_sec": 0, 00:14:29.954 "r_mbytes_per_sec": 0, 00:14:29.954 "w_mbytes_per_sec": 0 00:14:29.954 }, 00:14:29.954 "claimed": false, 00:14:29.954 "zoned": false, 00:14:29.954 "supported_io_types": { 00:14:29.954 "read": true, 00:14:29.954 "write": true, 00:14:29.954 "unmap": true, 00:14:29.954 "flush": true, 00:14:29.954 "reset": true, 00:14:29.954 "nvme_admin": false, 00:14:29.954 "nvme_io": false, 00:14:29.954 "nvme_io_md": false, 00:14:29.954 "write_zeroes": true, 00:14:29.954 "zcopy": true, 00:14:29.954 "get_zone_info": false, 00:14:29.954 "zone_management": false, 00:14:29.954 "zone_append": false, 00:14:29.954 "compare": false, 00:14:29.954 "compare_and_write": false, 00:14:29.954 "abort": true, 00:14:29.954 "seek_hole": false, 00:14:29.954 "seek_data": false, 00:14:29.954 "copy": true, 00:14:29.954 "nvme_iov_md": false 00:14:29.954 }, 00:14:29.954 "memory_domains": [ 00:14:29.954 { 00:14:29.954 "dma_device_id": "system", 00:14:29.954 "dma_device_type": 1 00:14:29.954 }, 00:14:29.954 { 00:14:29.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.954 "dma_device_type": 2 00:14:29.954 } 00:14:29.954 ], 00:14:29.954 "driver_specific": {} 00:14:29.954 } 00:14:29.954 ] 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.954 BaseBdev3 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:29.954 [ 00:14:29.954 { 00:14:29.954 "name": "BaseBdev3", 00:14:29.954 "aliases": [ 00:14:29.954 "268cba85-c3e2-4bdd-9d02-1b622b108a63" 00:14:29.954 ], 00:14:29.954 "product_name": "Malloc disk", 00:14:29.954 "block_size": 512, 00:14:29.954 "num_blocks": 65536, 00:14:29.954 "uuid": "268cba85-c3e2-4bdd-9d02-1b622b108a63", 00:14:29.954 "assigned_rate_limits": { 00:14:29.954 "rw_ios_per_sec": 0, 00:14:29.954 "rw_mbytes_per_sec": 0, 00:14:29.954 "r_mbytes_per_sec": 0, 00:14:29.954 "w_mbytes_per_sec": 0 00:14:29.954 }, 00:14:29.954 "claimed": false, 00:14:29.954 "zoned": false, 00:14:29.954 "supported_io_types": { 00:14:29.954 "read": true, 00:14:29.954 "write": true, 00:14:29.954 "unmap": true, 00:14:29.954 "flush": true, 00:14:29.954 "reset": true, 00:14:29.954 "nvme_admin": false, 00:14:29.954 "nvme_io": false, 00:14:29.954 "nvme_io_md": false, 00:14:29.954 "write_zeroes": true, 00:14:29.954 "zcopy": true, 00:14:29.954 "get_zone_info": false, 00:14:29.954 "zone_management": false, 00:14:29.954 "zone_append": false, 00:14:29.954 "compare": false, 00:14:29.954 "compare_and_write": false, 00:14:29.954 "abort": true, 00:14:29.954 "seek_hole": false, 00:14:29.954 "seek_data": false, 00:14:29.954 "copy": true, 00:14:29.954 "nvme_iov_md": false 00:14:29.954 }, 00:14:29.954 "memory_domains": [ 00:14:29.954 { 00:14:29.954 "dma_device_id": "system", 00:14:29.954 "dma_device_type": 1 00:14:29.954 }, 00:14:29.954 { 00:14:29.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.954 "dma_device_type": 2 00:14:29.954 } 00:14:29.954 ], 00:14:29.954 "driver_specific": {} 00:14:29.954 } 00:14:29.954 ] 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:29.954 09:51:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.954 [2024-12-06 09:51:55.156911] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:29.954 [2024-12-06 09:51:55.156956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:29.954 [2024-12-06 09:51:55.156977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.954 [2024-12-06 09:51:55.158753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.954 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.955 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.955 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.955 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.955 09:51:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.955 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.955 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.955 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.955 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.955 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.955 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.955 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.955 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.955 "name": "Existed_Raid", 00:14:29.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.955 "strip_size_kb": 64, 00:14:29.955 "state": "configuring", 00:14:29.955 "raid_level": "raid5f", 00:14:29.955 "superblock": false, 00:14:29.955 "num_base_bdevs": 3, 00:14:29.955 "num_base_bdevs_discovered": 2, 00:14:29.955 "num_base_bdevs_operational": 3, 00:14:29.955 "base_bdevs_list": [ 00:14:29.955 { 00:14:29.955 "name": "BaseBdev1", 00:14:29.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.955 "is_configured": false, 00:14:29.955 "data_offset": 0, 00:14:29.955 "data_size": 0 00:14:29.955 }, 00:14:29.955 { 00:14:29.955 "name": "BaseBdev2", 00:14:29.955 "uuid": "33fb7876-4060-4a3c-b01a-3f084d0d70a7", 00:14:29.955 "is_configured": true, 00:14:29.955 "data_offset": 0, 00:14:29.955 "data_size": 65536 00:14:29.955 }, 00:14:29.955 { 00:14:29.955 "name": "BaseBdev3", 00:14:29.955 "uuid": "268cba85-c3e2-4bdd-9d02-1b622b108a63", 00:14:29.955 "is_configured": true, 
00:14:29.955 "data_offset": 0, 00:14:29.955 "data_size": 65536 00:14:29.955 } 00:14:29.955 ] 00:14:29.955 }' 00:14:29.955 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.955 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.523 [2024-12-06 09:51:55.556263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.523 09:51:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.523 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.523 "name": "Existed_Raid", 00:14:30.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.524 "strip_size_kb": 64, 00:14:30.524 "state": "configuring", 00:14:30.524 "raid_level": "raid5f", 00:14:30.524 "superblock": false, 00:14:30.524 "num_base_bdevs": 3, 00:14:30.524 "num_base_bdevs_discovered": 1, 00:14:30.524 "num_base_bdevs_operational": 3, 00:14:30.524 "base_bdevs_list": [ 00:14:30.524 { 00:14:30.524 "name": "BaseBdev1", 00:14:30.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.524 "is_configured": false, 00:14:30.524 "data_offset": 0, 00:14:30.524 "data_size": 0 00:14:30.524 }, 00:14:30.524 { 00:14:30.524 "name": null, 00:14:30.524 "uuid": "33fb7876-4060-4a3c-b01a-3f084d0d70a7", 00:14:30.524 "is_configured": false, 00:14:30.524 "data_offset": 0, 00:14:30.524 "data_size": 65536 00:14:30.524 }, 00:14:30.524 { 00:14:30.524 "name": "BaseBdev3", 00:14:30.524 "uuid": "268cba85-c3e2-4bdd-9d02-1b622b108a63", 00:14:30.524 "is_configured": true, 00:14:30.524 "data_offset": 0, 00:14:30.524 "data_size": 65536 00:14:30.524 } 00:14:30.524 ] 00:14:30.524 }' 00:14:30.524 09:51:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.524 09:51:55 
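The verify_raid_bdev_state helper traced above selects the Existed_Raid entry from `rpc_cmd bdev_raid_get_bdevs all` and compares its fields against the expected state. A minimal sketch of those comparisons, replayed against the JSON captured in the log just above (after BaseBdev2 was removed); the set of fields checked here mirrors the helper's arguments (expected state, raid level, strip size, operational count), anything beyond that is an assumption:

```python
import json

# raid_bdev_info as captured in the log after bdev_raid_remove_base_bdev
# BaseBdev2: the removed slot keeps its uuid, but "name" becomes null and
# is_configured flips to false, so only 1 of 3 base bdevs is discovered.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "uuid": "00000000-0000-0000-0000-000000000000",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": false,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 0},
    {"name": null, "uuid": "33fb7876-4060-4a3c-b01a-3f084d0d70a7",
     "is_configured": false, "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev3", "uuid": "268cba85-c3e2-4bdd-9d02-1b622b108a63",
     "is_configured": true, "data_offset": 0, "data_size": 65536}
  ]
}
""")

# The comparisons corresponding to
# verify_raid_bdev_state Existed_Raid configuring raid5f 64 3:
assert raid_bdev_info["state"] == "configuring"
assert raid_bdev_info["raid_level"] == "raid5f"
assert raid_bdev_info["strip_size_kb"] == 64
assert raid_bdev_info["num_base_bdevs_operational"] == 3
assert raid_bdev_info["num_base_bdevs_discovered"] == 1
```

Note the RAID stays in the configuring state (not degraded) because it was never fully assembled: BaseBdev1 has not been created yet at this point in the test.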
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.796 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:30.796 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.796 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.796 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.797 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.797 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:30.797 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:30.797 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.797 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.077 [2024-12-06 09:51:56.068215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.077 BaseBdev1 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.077 09:51:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.077 [ 00:14:31.077 { 00:14:31.077 "name": "BaseBdev1", 00:14:31.077 "aliases": [ 00:14:31.077 "16714afc-0e02-4e5a-b41f-5dcb6cccc9df" 00:14:31.077 ], 00:14:31.077 "product_name": "Malloc disk", 00:14:31.077 "block_size": 512, 00:14:31.077 "num_blocks": 65536, 00:14:31.077 "uuid": "16714afc-0e02-4e5a-b41f-5dcb6cccc9df", 00:14:31.077 "assigned_rate_limits": { 00:14:31.077 "rw_ios_per_sec": 0, 00:14:31.077 "rw_mbytes_per_sec": 0, 00:14:31.077 "r_mbytes_per_sec": 0, 00:14:31.077 "w_mbytes_per_sec": 0 00:14:31.077 }, 00:14:31.077 "claimed": true, 00:14:31.077 "claim_type": "exclusive_write", 00:14:31.077 "zoned": false, 00:14:31.077 "supported_io_types": { 00:14:31.077 "read": true, 00:14:31.077 "write": true, 00:14:31.077 "unmap": true, 00:14:31.077 "flush": true, 00:14:31.077 "reset": true, 00:14:31.077 "nvme_admin": false, 00:14:31.077 "nvme_io": false, 00:14:31.077 "nvme_io_md": false, 00:14:31.077 "write_zeroes": true, 00:14:31.077 "zcopy": true, 00:14:31.077 "get_zone_info": false, 00:14:31.077 "zone_management": false, 00:14:31.077 "zone_append": false, 00:14:31.077 
"compare": false, 00:14:31.077 "compare_and_write": false, 00:14:31.077 "abort": true, 00:14:31.077 "seek_hole": false, 00:14:31.077 "seek_data": false, 00:14:31.077 "copy": true, 00:14:31.077 "nvme_iov_md": false 00:14:31.077 }, 00:14:31.077 "memory_domains": [ 00:14:31.077 { 00:14:31.077 "dma_device_id": "system", 00:14:31.077 "dma_device_type": 1 00:14:31.077 }, 00:14:31.077 { 00:14:31.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.077 "dma_device_type": 2 00:14:31.077 } 00:14:31.077 ], 00:14:31.077 "driver_specific": {} 00:14:31.077 } 00:14:31.077 ] 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.077 09:51:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.077 "name": "Existed_Raid", 00:14:31.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.077 "strip_size_kb": 64, 00:14:31.077 "state": "configuring", 00:14:31.077 "raid_level": "raid5f", 00:14:31.077 "superblock": false, 00:14:31.077 "num_base_bdevs": 3, 00:14:31.077 "num_base_bdevs_discovered": 2, 00:14:31.077 "num_base_bdevs_operational": 3, 00:14:31.077 "base_bdevs_list": [ 00:14:31.077 { 00:14:31.077 "name": "BaseBdev1", 00:14:31.077 "uuid": "16714afc-0e02-4e5a-b41f-5dcb6cccc9df", 00:14:31.077 "is_configured": true, 00:14:31.077 "data_offset": 0, 00:14:31.077 "data_size": 65536 00:14:31.077 }, 00:14:31.077 { 00:14:31.077 "name": null, 00:14:31.077 "uuid": "33fb7876-4060-4a3c-b01a-3f084d0d70a7", 00:14:31.077 "is_configured": false, 00:14:31.077 "data_offset": 0, 00:14:31.077 "data_size": 65536 00:14:31.077 }, 00:14:31.077 { 00:14:31.077 "name": "BaseBdev3", 00:14:31.077 "uuid": "268cba85-c3e2-4bdd-9d02-1b622b108a63", 00:14:31.077 "is_configured": true, 00:14:31.077 "data_offset": 0, 00:14:31.077 "data_size": 65536 00:14:31.077 } 00:14:31.077 ] 00:14:31.077 }' 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.077 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.338 09:51:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.338 [2024-12-06 09:51:56.595385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.338 09:51:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.338 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.598 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.598 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.598 "name": "Existed_Raid", 00:14:31.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.598 "strip_size_kb": 64, 00:14:31.598 "state": "configuring", 00:14:31.598 "raid_level": "raid5f", 00:14:31.598 "superblock": false, 00:14:31.598 "num_base_bdevs": 3, 00:14:31.598 "num_base_bdevs_discovered": 1, 00:14:31.598 "num_base_bdevs_operational": 3, 00:14:31.598 "base_bdevs_list": [ 00:14:31.598 { 00:14:31.598 "name": "BaseBdev1", 00:14:31.598 "uuid": "16714afc-0e02-4e5a-b41f-5dcb6cccc9df", 00:14:31.598 "is_configured": true, 00:14:31.598 "data_offset": 0, 00:14:31.598 "data_size": 65536 00:14:31.598 }, 00:14:31.598 { 00:14:31.598 "name": null, 00:14:31.598 "uuid": "33fb7876-4060-4a3c-b01a-3f084d0d70a7", 00:14:31.598 "is_configured": false, 00:14:31.598 "data_offset": 0, 00:14:31.598 "data_size": 65536 00:14:31.598 }, 00:14:31.598 { 00:14:31.598 "name": null, 
00:14:31.598 "uuid": "268cba85-c3e2-4bdd-9d02-1b622b108a63", 00:14:31.598 "is_configured": false, 00:14:31.598 "data_offset": 0, 00:14:31.598 "data_size": 65536 00:14:31.598 } 00:14:31.598 ] 00:14:31.598 }' 00:14:31.598 09:51:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.598 09:51:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.859 [2024-12-06 09:51:57.078631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.859 09:51:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.859 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.119 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.119 "name": "Existed_Raid", 00:14:32.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.119 "strip_size_kb": 64, 00:14:32.119 "state": "configuring", 00:14:32.119 "raid_level": "raid5f", 00:14:32.119 "superblock": false, 00:14:32.119 "num_base_bdevs": 3, 00:14:32.119 "num_base_bdevs_discovered": 2, 00:14:32.119 "num_base_bdevs_operational": 3, 00:14:32.119 "base_bdevs_list": [ 00:14:32.119 { 
00:14:32.119 "name": "BaseBdev1", 00:14:32.119 "uuid": "16714afc-0e02-4e5a-b41f-5dcb6cccc9df", 00:14:32.119 "is_configured": true, 00:14:32.119 "data_offset": 0, 00:14:32.119 "data_size": 65536 00:14:32.119 }, 00:14:32.119 { 00:14:32.119 "name": null, 00:14:32.119 "uuid": "33fb7876-4060-4a3c-b01a-3f084d0d70a7", 00:14:32.119 "is_configured": false, 00:14:32.119 "data_offset": 0, 00:14:32.119 "data_size": 65536 00:14:32.119 }, 00:14:32.119 { 00:14:32.119 "name": "BaseBdev3", 00:14:32.119 "uuid": "268cba85-c3e2-4bdd-9d02-1b622b108a63", 00:14:32.119 "is_configured": true, 00:14:32.119 "data_offset": 0, 00:14:32.119 "data_size": 65536 00:14:32.119 } 00:14:32.119 ] 00:14:32.119 }' 00:14:32.119 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.119 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.379 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.379 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:32.379 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.379 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.379 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.379 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:32.379 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:32.379 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.379 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.379 [2024-12-06 09:51:57.609761] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.639 09:51:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.639 "name": "Existed_Raid", 00:14:32.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.639 "strip_size_kb": 64, 00:14:32.639 "state": "configuring", 00:14:32.639 "raid_level": "raid5f", 00:14:32.639 "superblock": false, 00:14:32.639 "num_base_bdevs": 3, 00:14:32.639 "num_base_bdevs_discovered": 1, 00:14:32.639 "num_base_bdevs_operational": 3, 00:14:32.639 "base_bdevs_list": [ 00:14:32.639 { 00:14:32.639 "name": null, 00:14:32.639 "uuid": "16714afc-0e02-4e5a-b41f-5dcb6cccc9df", 00:14:32.639 "is_configured": false, 00:14:32.639 "data_offset": 0, 00:14:32.639 "data_size": 65536 00:14:32.639 }, 00:14:32.639 { 00:14:32.640 "name": null, 00:14:32.640 "uuid": "33fb7876-4060-4a3c-b01a-3f084d0d70a7", 00:14:32.640 "is_configured": false, 00:14:32.640 "data_offset": 0, 00:14:32.640 "data_size": 65536 00:14:32.640 }, 00:14:32.640 { 00:14:32.640 "name": "BaseBdev3", 00:14:32.640 "uuid": "268cba85-c3e2-4bdd-9d02-1b622b108a63", 00:14:32.640 "is_configured": true, 00:14:32.640 "data_offset": 0, 00:14:32.640 "data_size": 65536 00:14:32.640 } 00:14:32.640 ] 00:14:32.640 }' 00:14:32.640 09:51:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.640 09:51:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.209 [2024-12-06 09:51:58.240180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.209 09:51:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.209 "name": "Existed_Raid", 00:14:33.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.209 "strip_size_kb": 64, 00:14:33.209 "state": "configuring", 00:14:33.209 "raid_level": "raid5f", 00:14:33.209 "superblock": false, 00:14:33.209 "num_base_bdevs": 3, 00:14:33.209 "num_base_bdevs_discovered": 2, 00:14:33.209 "num_base_bdevs_operational": 3, 00:14:33.209 "base_bdevs_list": [ 00:14:33.209 { 00:14:33.209 "name": null, 00:14:33.209 "uuid": "16714afc-0e02-4e5a-b41f-5dcb6cccc9df", 00:14:33.209 "is_configured": false, 00:14:33.209 "data_offset": 0, 00:14:33.209 "data_size": 65536 00:14:33.209 }, 00:14:33.209 { 00:14:33.209 "name": "BaseBdev2", 00:14:33.209 "uuid": "33fb7876-4060-4a3c-b01a-3f084d0d70a7", 00:14:33.209 "is_configured": true, 00:14:33.209 "data_offset": 0, 00:14:33.209 "data_size": 65536 00:14:33.209 }, 00:14:33.209 { 00:14:33.209 "name": "BaseBdev3", 00:14:33.209 "uuid": "268cba85-c3e2-4bdd-9d02-1b622b108a63", 00:14:33.209 "is_configured": true, 00:14:33.209 "data_offset": 0, 00:14:33.209 "data_size": 65536 00:14:33.209 } 00:14:33.209 ] 00:14:33.209 }' 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.209 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.468 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.468 09:51:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.468 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.468 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:33.468 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.468 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:33.468 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.468 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.468 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.468 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:33.468 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 16714afc-0e02-4e5a-b41f-5dcb6cccc9df 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.728 [2024-12-06 09:51:58.793745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:33.728 [2024-12-06 09:51:58.793874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:33.728 [2024-12-06 09:51:58.793902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:33.728 [2024-12-06 09:51:58.794206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:33.728 [2024-12-06 09:51:58.799771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:33.728 [2024-12-06 09:51:58.799827] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:33.728 [2024-12-06 09:51:58.800191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.728 NewBaseBdev 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.728 09:51:58 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.728 [ 00:14:33.728 { 00:14:33.728 "name": "NewBaseBdev", 00:14:33.728 "aliases": [ 00:14:33.728 "16714afc-0e02-4e5a-b41f-5dcb6cccc9df" 00:14:33.728 ], 00:14:33.728 "product_name": "Malloc disk", 00:14:33.728 "block_size": 512, 00:14:33.728 "num_blocks": 65536, 00:14:33.728 "uuid": "16714afc-0e02-4e5a-b41f-5dcb6cccc9df", 00:14:33.728 "assigned_rate_limits": { 00:14:33.728 "rw_ios_per_sec": 0, 00:14:33.728 "rw_mbytes_per_sec": 0, 00:14:33.728 "r_mbytes_per_sec": 0, 00:14:33.728 "w_mbytes_per_sec": 0 00:14:33.728 }, 00:14:33.728 "claimed": true, 00:14:33.728 "claim_type": "exclusive_write", 00:14:33.728 "zoned": false, 00:14:33.728 "supported_io_types": { 00:14:33.728 "read": true, 00:14:33.728 "write": true, 00:14:33.728 "unmap": true, 00:14:33.728 "flush": true, 00:14:33.728 "reset": true, 00:14:33.728 "nvme_admin": false, 00:14:33.728 "nvme_io": false, 00:14:33.728 "nvme_io_md": false, 00:14:33.728 "write_zeroes": true, 00:14:33.728 "zcopy": true, 00:14:33.728 "get_zone_info": false, 00:14:33.728 "zone_management": false, 00:14:33.728 "zone_append": false, 00:14:33.728 "compare": false, 00:14:33.728 "compare_and_write": false, 00:14:33.728 "abort": true, 00:14:33.728 "seek_hole": false, 00:14:33.728 "seek_data": false, 00:14:33.728 "copy": true, 00:14:33.728 "nvme_iov_md": false 00:14:33.728 }, 00:14:33.728 "memory_domains": [ 00:14:33.728 { 00:14:33.728 "dma_device_id": "system", 00:14:33.728 "dma_device_type": 1 00:14:33.728 }, 00:14:33.728 { 00:14:33.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.728 "dma_device_type": 2 00:14:33.728 } 00:14:33.728 ], 00:14:33.728 "driver_specific": {} 00:14:33.728 } 00:14:33.728 ] 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.728 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:33.729 09:51:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.729 "name": "Existed_Raid", 00:14:33.729 "uuid": "33acb799-c089-4109-9eed-a2e59482faa3", 00:14:33.729 "strip_size_kb": 64, 00:14:33.729 "state": "online", 
00:14:33.729 "raid_level": "raid5f", 00:14:33.729 "superblock": false, 00:14:33.729 "num_base_bdevs": 3, 00:14:33.729 "num_base_bdevs_discovered": 3, 00:14:33.729 "num_base_bdevs_operational": 3, 00:14:33.729 "base_bdevs_list": [ 00:14:33.729 { 00:14:33.729 "name": "NewBaseBdev", 00:14:33.729 "uuid": "16714afc-0e02-4e5a-b41f-5dcb6cccc9df", 00:14:33.729 "is_configured": true, 00:14:33.729 "data_offset": 0, 00:14:33.729 "data_size": 65536 00:14:33.729 }, 00:14:33.729 { 00:14:33.729 "name": "BaseBdev2", 00:14:33.729 "uuid": "33fb7876-4060-4a3c-b01a-3f084d0d70a7", 00:14:33.729 "is_configured": true, 00:14:33.729 "data_offset": 0, 00:14:33.729 "data_size": 65536 00:14:33.729 }, 00:14:33.729 { 00:14:33.729 "name": "BaseBdev3", 00:14:33.729 "uuid": "268cba85-c3e2-4bdd-9d02-1b622b108a63", 00:14:33.729 "is_configured": true, 00:14:33.729 "data_offset": 0, 00:14:33.729 "data_size": 65536 00:14:33.729 } 00:14:33.729 ] 00:14:33.729 }' 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.729 09:51:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.299 [2024-12-06 09:51:59.306231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:34.299 "name": "Existed_Raid", 00:14:34.299 "aliases": [ 00:14:34.299 "33acb799-c089-4109-9eed-a2e59482faa3" 00:14:34.299 ], 00:14:34.299 "product_name": "Raid Volume", 00:14:34.299 "block_size": 512, 00:14:34.299 "num_blocks": 131072, 00:14:34.299 "uuid": "33acb799-c089-4109-9eed-a2e59482faa3", 00:14:34.299 "assigned_rate_limits": { 00:14:34.299 "rw_ios_per_sec": 0, 00:14:34.299 "rw_mbytes_per_sec": 0, 00:14:34.299 "r_mbytes_per_sec": 0, 00:14:34.299 "w_mbytes_per_sec": 0 00:14:34.299 }, 00:14:34.299 "claimed": false, 00:14:34.299 "zoned": false, 00:14:34.299 "supported_io_types": { 00:14:34.299 "read": true, 00:14:34.299 "write": true, 00:14:34.299 "unmap": false, 00:14:34.299 "flush": false, 00:14:34.299 "reset": true, 00:14:34.299 "nvme_admin": false, 00:14:34.299 "nvme_io": false, 00:14:34.299 "nvme_io_md": false, 00:14:34.299 "write_zeroes": true, 00:14:34.299 "zcopy": false, 00:14:34.299 "get_zone_info": false, 00:14:34.299 "zone_management": false, 00:14:34.299 "zone_append": false, 00:14:34.299 "compare": false, 00:14:34.299 "compare_and_write": false, 00:14:34.299 "abort": false, 00:14:34.299 "seek_hole": false, 00:14:34.299 "seek_data": false, 00:14:34.299 "copy": false, 00:14:34.299 "nvme_iov_md": false 00:14:34.299 }, 00:14:34.299 "driver_specific": { 00:14:34.299 "raid": { 00:14:34.299 "uuid": "33acb799-c089-4109-9eed-a2e59482faa3", 
00:14:34.299 "strip_size_kb": 64, 00:14:34.299 "state": "online", 00:14:34.299 "raid_level": "raid5f", 00:14:34.299 "superblock": false, 00:14:34.299 "num_base_bdevs": 3, 00:14:34.299 "num_base_bdevs_discovered": 3, 00:14:34.299 "num_base_bdevs_operational": 3, 00:14:34.299 "base_bdevs_list": [ 00:14:34.299 { 00:14:34.299 "name": "NewBaseBdev", 00:14:34.299 "uuid": "16714afc-0e02-4e5a-b41f-5dcb6cccc9df", 00:14:34.299 "is_configured": true, 00:14:34.299 "data_offset": 0, 00:14:34.299 "data_size": 65536 00:14:34.299 }, 00:14:34.299 { 00:14:34.299 "name": "BaseBdev2", 00:14:34.299 "uuid": "33fb7876-4060-4a3c-b01a-3f084d0d70a7", 00:14:34.299 "is_configured": true, 00:14:34.299 "data_offset": 0, 00:14:34.299 "data_size": 65536 00:14:34.299 }, 00:14:34.299 { 00:14:34.299 "name": "BaseBdev3", 00:14:34.299 "uuid": "268cba85-c3e2-4bdd-9d02-1b622b108a63", 00:14:34.299 "is_configured": true, 00:14:34.299 "data_offset": 0, 00:14:34.299 "data_size": 65536 00:14:34.299 } 00:14:34.299 ] 00:14:34.299 } 00:14:34.299 } 00:14:34.299 }' 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:34.299 BaseBdev2 00:14:34.299 BaseBdev3' 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.299 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:34.300 
09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.300 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.560 [2024-12-06 09:51:59.597507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:34.560 [2024-12-06 09:51:59.597584] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.560 [2024-12-06 09:51:59.597685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.560 [2024-12-06 09:51:59.598020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.560 [2024-12-06 09:51:59.598093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79766 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79766 ']' 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 79766 
00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79766 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.560 killing process with pid 79766 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79766' 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79766 00:14:34.560 [2024-12-06 09:51:59.647895] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.560 09:51:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79766 00:14:34.820 [2024-12-06 09:51:59.949433] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:36.199 00:14:36.199 real 0m10.665s 00:14:36.199 user 0m16.927s 00:14:36.199 sys 0m1.965s 00:14:36.199 ************************************ 00:14:36.199 END TEST raid5f_state_function_test 00:14:36.199 ************************************ 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.199 09:52:01 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:36.199 09:52:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:36.199 
09:52:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.199 09:52:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:36.199 ************************************ 00:14:36.199 START TEST raid5f_state_function_test_sb 00:14:36.199 ************************************ 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:36.199 
09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80387 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:36.199 Process raid pid: 80387 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80387' 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80387 00:14:36.199 09:52:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80387 ']' 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.199 09:52:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.199 [2024-12-06 09:52:01.245991] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:14:36.199 [2024-12-06 09:52:01.246237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.199 [2024-12-06 09:52:01.421045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.458 [2024-12-06 09:52:01.539612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.717 [2024-12-06 09:52:01.740048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.717 [2024-12-06 09:52:01.740156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:36.977 09:52:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.977 [2024-12-06 09:52:02.096864] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.977 [2024-12-06 09:52:02.096971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.977 [2024-12-06 09:52:02.097039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.977 [2024-12-06 09:52:02.097104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.977 [2024-12-06 09:52:02.097149] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:36.977 [2024-12-06 09:52:02.097194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.977 "name": "Existed_Raid", 00:14:36.977 "uuid": "325e2fe5-71aa-457a-8e12-a085c10cceed", 00:14:36.977 "strip_size_kb": 64, 00:14:36.977 "state": "configuring", 00:14:36.977 "raid_level": "raid5f", 00:14:36.977 "superblock": true, 00:14:36.977 "num_base_bdevs": 3, 00:14:36.977 "num_base_bdevs_discovered": 0, 00:14:36.977 "num_base_bdevs_operational": 3, 00:14:36.977 "base_bdevs_list": [ 00:14:36.977 { 00:14:36.977 "name": "BaseBdev1", 00:14:36.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.977 "is_configured": false, 00:14:36.977 "data_offset": 0, 00:14:36.977 "data_size": 0 00:14:36.977 }, 00:14:36.977 { 00:14:36.977 "name": "BaseBdev2", 00:14:36.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.977 "is_configured": false, 00:14:36.977 
"data_offset": 0, 00:14:36.977 "data_size": 0 00:14:36.977 }, 00:14:36.977 { 00:14:36.977 "name": "BaseBdev3", 00:14:36.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.977 "is_configured": false, 00:14:36.977 "data_offset": 0, 00:14:36.977 "data_size": 0 00:14:36.977 } 00:14:36.977 ] 00:14:36.977 }' 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.977 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.546 [2024-12-06 09:52:02.576014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:37.546 [2024-12-06 09:52:02.576052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.546 [2024-12-06 09:52:02.588013] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:37.546 [2024-12-06 09:52:02.588063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:37.546 [2024-12-06 09:52:02.588072] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:37.546 [2024-12-06 09:52:02.588098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:37.546 [2024-12-06 09:52:02.588105] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:37.546 [2024-12-06 09:52:02.588115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.546 [2024-12-06 09:52:02.636232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.546 BaseBdev1 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:37.546 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.547 [ 00:14:37.547 { 00:14:37.547 "name": "BaseBdev1", 00:14:37.547 "aliases": [ 00:14:37.547 "07d0fd60-bb8c-4329-a186-b227257f817d" 00:14:37.547 ], 00:14:37.547 "product_name": "Malloc disk", 00:14:37.547 "block_size": 512, 00:14:37.547 "num_blocks": 65536, 00:14:37.547 "uuid": "07d0fd60-bb8c-4329-a186-b227257f817d", 00:14:37.547 "assigned_rate_limits": { 00:14:37.547 "rw_ios_per_sec": 0, 00:14:37.547 "rw_mbytes_per_sec": 0, 00:14:37.547 "r_mbytes_per_sec": 0, 00:14:37.547 "w_mbytes_per_sec": 0 00:14:37.547 }, 00:14:37.547 "claimed": true, 00:14:37.547 "claim_type": "exclusive_write", 00:14:37.547 "zoned": false, 00:14:37.547 "supported_io_types": { 00:14:37.547 "read": true, 00:14:37.547 "write": true, 00:14:37.547 "unmap": true, 00:14:37.547 "flush": true, 00:14:37.547 "reset": true, 00:14:37.547 "nvme_admin": false, 00:14:37.547 "nvme_io": false, 00:14:37.547 "nvme_io_md": false, 00:14:37.547 "write_zeroes": true, 00:14:37.547 "zcopy": true, 00:14:37.547 "get_zone_info": false, 00:14:37.547 "zone_management": false, 00:14:37.547 "zone_append": false, 00:14:37.547 "compare": false, 00:14:37.547 "compare_and_write": false, 00:14:37.547 "abort": true, 00:14:37.547 "seek_hole": false, 00:14:37.547 
"seek_data": false, 00:14:37.547 "copy": true, 00:14:37.547 "nvme_iov_md": false 00:14:37.547 }, 00:14:37.547 "memory_domains": [ 00:14:37.547 { 00:14:37.547 "dma_device_id": "system", 00:14:37.547 "dma_device_type": 1 00:14:37.547 }, 00:14:37.547 { 00:14:37.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.547 "dma_device_type": 2 00:14:37.547 } 00:14:37.547 ], 00:14:37.547 "driver_specific": {} 00:14:37.547 } 00:14:37.547 ] 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.547 "name": "Existed_Raid", 00:14:37.547 "uuid": "7a582172-545e-4f24-ab2d-7a42599a3afc", 00:14:37.547 "strip_size_kb": 64, 00:14:37.547 "state": "configuring", 00:14:37.547 "raid_level": "raid5f", 00:14:37.547 "superblock": true, 00:14:37.547 "num_base_bdevs": 3, 00:14:37.547 "num_base_bdevs_discovered": 1, 00:14:37.547 "num_base_bdevs_operational": 3, 00:14:37.547 "base_bdevs_list": [ 00:14:37.547 { 00:14:37.547 "name": "BaseBdev1", 00:14:37.547 "uuid": "07d0fd60-bb8c-4329-a186-b227257f817d", 00:14:37.547 "is_configured": true, 00:14:37.547 "data_offset": 2048, 00:14:37.547 "data_size": 63488 00:14:37.547 }, 00:14:37.547 { 00:14:37.547 "name": "BaseBdev2", 00:14:37.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.547 "is_configured": false, 00:14:37.547 "data_offset": 0, 00:14:37.547 "data_size": 0 00:14:37.547 }, 00:14:37.547 { 00:14:37.547 "name": "BaseBdev3", 00:14:37.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.547 "is_configured": false, 00:14:37.547 "data_offset": 0, 00:14:37.547 "data_size": 0 00:14:37.547 } 00:14:37.547 ] 00:14:37.547 }' 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.547 09:52:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.116 [2024-12-06 09:52:03.131478] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:38.116 [2024-12-06 09:52:03.131533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.116 [2024-12-06 09:52:03.143516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:38.116 [2024-12-06 09:52:03.145352] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:38.116 [2024-12-06 09:52:03.145396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:38.116 [2024-12-06 09:52:03.145407] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:38.116 [2024-12-06 09:52:03.145415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.116 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.117 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.117 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.117 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.117 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.117 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.117 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.117 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.117 "name": 
"Existed_Raid", 00:14:38.117 "uuid": "9152570d-06e2-45c5-92e0-0519b379b3d5", 00:14:38.117 "strip_size_kb": 64, 00:14:38.117 "state": "configuring", 00:14:38.117 "raid_level": "raid5f", 00:14:38.117 "superblock": true, 00:14:38.117 "num_base_bdevs": 3, 00:14:38.117 "num_base_bdevs_discovered": 1, 00:14:38.117 "num_base_bdevs_operational": 3, 00:14:38.117 "base_bdevs_list": [ 00:14:38.117 { 00:14:38.117 "name": "BaseBdev1", 00:14:38.117 "uuid": "07d0fd60-bb8c-4329-a186-b227257f817d", 00:14:38.117 "is_configured": true, 00:14:38.117 "data_offset": 2048, 00:14:38.117 "data_size": 63488 00:14:38.117 }, 00:14:38.117 { 00:14:38.117 "name": "BaseBdev2", 00:14:38.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.117 "is_configured": false, 00:14:38.117 "data_offset": 0, 00:14:38.117 "data_size": 0 00:14:38.117 }, 00:14:38.117 { 00:14:38.117 "name": "BaseBdev3", 00:14:38.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.117 "is_configured": false, 00:14:38.117 "data_offset": 0, 00:14:38.117 "data_size": 0 00:14:38.117 } 00:14:38.117 ] 00:14:38.117 }' 00:14:38.117 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.117 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.376 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:38.376 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.376 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.634 [2024-12-06 09:52:03.660246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:38.634 BaseBdev2 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.634 [ 00:14:38.634 { 00:14:38.634 "name": "BaseBdev2", 00:14:38.634 "aliases": [ 00:14:38.634 "f20c3164-42fa-4392-9829-e9771b4c8a7a" 00:14:38.634 ], 00:14:38.634 "product_name": "Malloc disk", 00:14:38.634 "block_size": 512, 00:14:38.634 "num_blocks": 65536, 00:14:38.634 "uuid": "f20c3164-42fa-4392-9829-e9771b4c8a7a", 00:14:38.634 "assigned_rate_limits": { 00:14:38.634 "rw_ios_per_sec": 0, 00:14:38.634 "rw_mbytes_per_sec": 0, 00:14:38.634 "r_mbytes_per_sec": 0, 00:14:38.634 "w_mbytes_per_sec": 0 00:14:38.634 }, 00:14:38.634 "claimed": true, 
00:14:38.634 "claim_type": "exclusive_write", 00:14:38.634 "zoned": false, 00:14:38.634 "supported_io_types": { 00:14:38.634 "read": true, 00:14:38.634 "write": true, 00:14:38.634 "unmap": true, 00:14:38.634 "flush": true, 00:14:38.634 "reset": true, 00:14:38.634 "nvme_admin": false, 00:14:38.634 "nvme_io": false, 00:14:38.634 "nvme_io_md": false, 00:14:38.634 "write_zeroes": true, 00:14:38.634 "zcopy": true, 00:14:38.634 "get_zone_info": false, 00:14:38.634 "zone_management": false, 00:14:38.634 "zone_append": false, 00:14:38.634 "compare": false, 00:14:38.634 "compare_and_write": false, 00:14:38.634 "abort": true, 00:14:38.634 "seek_hole": false, 00:14:38.634 "seek_data": false, 00:14:38.634 "copy": true, 00:14:38.634 "nvme_iov_md": false 00:14:38.634 }, 00:14:38.634 "memory_domains": [ 00:14:38.634 { 00:14:38.634 "dma_device_id": "system", 00:14:38.634 "dma_device_type": 1 00:14:38.634 }, 00:14:38.634 { 00:14:38.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.634 "dma_device_type": 2 00:14:38.634 } 00:14:38.634 ], 00:14:38.634 "driver_specific": {} 00:14:38.634 } 00:14:38.634 ] 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.634 09:52:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.634 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.634 "name": "Existed_Raid", 00:14:38.635 "uuid": "9152570d-06e2-45c5-92e0-0519b379b3d5", 00:14:38.635 "strip_size_kb": 64, 00:14:38.635 "state": "configuring", 00:14:38.635 "raid_level": "raid5f", 00:14:38.635 "superblock": true, 00:14:38.635 "num_base_bdevs": 3, 00:14:38.635 "num_base_bdevs_discovered": 2, 00:14:38.635 "num_base_bdevs_operational": 3, 00:14:38.635 "base_bdevs_list": [ 00:14:38.635 { 00:14:38.635 "name": "BaseBdev1", 00:14:38.635 "uuid": "07d0fd60-bb8c-4329-a186-b227257f817d", 
00:14:38.635 "is_configured": true, 00:14:38.635 "data_offset": 2048, 00:14:38.635 "data_size": 63488 00:14:38.635 }, 00:14:38.635 { 00:14:38.635 "name": "BaseBdev2", 00:14:38.635 "uuid": "f20c3164-42fa-4392-9829-e9771b4c8a7a", 00:14:38.635 "is_configured": true, 00:14:38.635 "data_offset": 2048, 00:14:38.635 "data_size": 63488 00:14:38.635 }, 00:14:38.635 { 00:14:38.635 "name": "BaseBdev3", 00:14:38.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.635 "is_configured": false, 00:14:38.635 "data_offset": 0, 00:14:38.635 "data_size": 0 00:14:38.635 } 00:14:38.635 ] 00:14:38.635 }' 00:14:38.635 09:52:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.635 09:52:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.894 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:38.894 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.894 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.153 [2024-12-06 09:52:04.175001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.153 [2024-12-06 09:52:04.175429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:39.153 [2024-12-06 09:52:04.175457] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:39.153 BaseBdev3 00:14:39.153 [2024-12-06 09:52:04.175940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.153 [2024-12-06 09:52:04.181549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:39.153 [2024-12-06 09:52:04.181611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:39.153 [2024-12-06 09:52:04.181927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.153 [ 00:14:39.153 { 00:14:39.153 "name": "BaseBdev3", 00:14:39.153 "aliases": [ 00:14:39.153 "e3882789-34fa-4b3a-9f39-a7bbcd7e00ba" 00:14:39.153 ], 00:14:39.153 "product_name": "Malloc disk", 00:14:39.153 "block_size": 512, 00:14:39.153 
"num_blocks": 65536, 00:14:39.153 "uuid": "e3882789-34fa-4b3a-9f39-a7bbcd7e00ba", 00:14:39.153 "assigned_rate_limits": { 00:14:39.153 "rw_ios_per_sec": 0, 00:14:39.153 "rw_mbytes_per_sec": 0, 00:14:39.153 "r_mbytes_per_sec": 0, 00:14:39.153 "w_mbytes_per_sec": 0 00:14:39.153 }, 00:14:39.153 "claimed": true, 00:14:39.153 "claim_type": "exclusive_write", 00:14:39.153 "zoned": false, 00:14:39.153 "supported_io_types": { 00:14:39.153 "read": true, 00:14:39.153 "write": true, 00:14:39.153 "unmap": true, 00:14:39.153 "flush": true, 00:14:39.153 "reset": true, 00:14:39.153 "nvme_admin": false, 00:14:39.153 "nvme_io": false, 00:14:39.153 "nvme_io_md": false, 00:14:39.153 "write_zeroes": true, 00:14:39.153 "zcopy": true, 00:14:39.153 "get_zone_info": false, 00:14:39.153 "zone_management": false, 00:14:39.153 "zone_append": false, 00:14:39.153 "compare": false, 00:14:39.153 "compare_and_write": false, 00:14:39.153 "abort": true, 00:14:39.153 "seek_hole": false, 00:14:39.153 "seek_data": false, 00:14:39.153 "copy": true, 00:14:39.153 "nvme_iov_md": false 00:14:39.153 }, 00:14:39.153 "memory_domains": [ 00:14:39.153 { 00:14:39.153 "dma_device_id": "system", 00:14:39.153 "dma_device_type": 1 00:14:39.153 }, 00:14:39.153 { 00:14:39.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.153 "dma_device_type": 2 00:14:39.153 } 00:14:39.153 ], 00:14:39.153 "driver_specific": {} 00:14:39.153 } 00:14:39.153 ] 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.153 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.153 "name": "Existed_Raid", 00:14:39.153 "uuid": "9152570d-06e2-45c5-92e0-0519b379b3d5", 00:14:39.153 "strip_size_kb": 64, 00:14:39.153 "state": "online", 00:14:39.153 "raid_level": "raid5f", 00:14:39.153 "superblock": true, 
00:14:39.153 "num_base_bdevs": 3, 00:14:39.153 "num_base_bdevs_discovered": 3, 00:14:39.153 "num_base_bdevs_operational": 3, 00:14:39.153 "base_bdevs_list": [ 00:14:39.153 { 00:14:39.153 "name": "BaseBdev1", 00:14:39.153 "uuid": "07d0fd60-bb8c-4329-a186-b227257f817d", 00:14:39.154 "is_configured": true, 00:14:39.154 "data_offset": 2048, 00:14:39.154 "data_size": 63488 00:14:39.154 }, 00:14:39.154 { 00:14:39.154 "name": "BaseBdev2", 00:14:39.154 "uuid": "f20c3164-42fa-4392-9829-e9771b4c8a7a", 00:14:39.154 "is_configured": true, 00:14:39.154 "data_offset": 2048, 00:14:39.154 "data_size": 63488 00:14:39.154 }, 00:14:39.154 { 00:14:39.154 "name": "BaseBdev3", 00:14:39.154 "uuid": "e3882789-34fa-4b3a-9f39-a7bbcd7e00ba", 00:14:39.154 "is_configured": true, 00:14:39.154 "data_offset": 2048, 00:14:39.154 "data_size": 63488 00:14:39.154 } 00:14:39.154 ] 00:14:39.154 }' 00:14:39.154 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.154 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.722 [2024-12-06 09:52:04.719774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:39.722 "name": "Existed_Raid", 00:14:39.722 "aliases": [ 00:14:39.722 "9152570d-06e2-45c5-92e0-0519b379b3d5" 00:14:39.722 ], 00:14:39.722 "product_name": "Raid Volume", 00:14:39.722 "block_size": 512, 00:14:39.722 "num_blocks": 126976, 00:14:39.722 "uuid": "9152570d-06e2-45c5-92e0-0519b379b3d5", 00:14:39.722 "assigned_rate_limits": { 00:14:39.722 "rw_ios_per_sec": 0, 00:14:39.722 "rw_mbytes_per_sec": 0, 00:14:39.722 "r_mbytes_per_sec": 0, 00:14:39.722 "w_mbytes_per_sec": 0 00:14:39.722 }, 00:14:39.722 "claimed": false, 00:14:39.722 "zoned": false, 00:14:39.722 "supported_io_types": { 00:14:39.722 "read": true, 00:14:39.722 "write": true, 00:14:39.722 "unmap": false, 00:14:39.722 "flush": false, 00:14:39.722 "reset": true, 00:14:39.722 "nvme_admin": false, 00:14:39.722 "nvme_io": false, 00:14:39.722 "nvme_io_md": false, 00:14:39.722 "write_zeroes": true, 00:14:39.722 "zcopy": false, 00:14:39.722 "get_zone_info": false, 00:14:39.722 "zone_management": false, 00:14:39.722 "zone_append": false, 00:14:39.722 "compare": false, 00:14:39.722 "compare_and_write": false, 00:14:39.722 "abort": false, 00:14:39.722 "seek_hole": false, 00:14:39.722 "seek_data": false, 00:14:39.722 "copy": false, 00:14:39.722 "nvme_iov_md": false 00:14:39.722 }, 00:14:39.722 "driver_specific": { 00:14:39.722 "raid": { 00:14:39.722 "uuid": "9152570d-06e2-45c5-92e0-0519b379b3d5", 00:14:39.722 
"strip_size_kb": 64, 00:14:39.722 "state": "online", 00:14:39.722 "raid_level": "raid5f", 00:14:39.722 "superblock": true, 00:14:39.722 "num_base_bdevs": 3, 00:14:39.722 "num_base_bdevs_discovered": 3, 00:14:39.722 "num_base_bdevs_operational": 3, 00:14:39.722 "base_bdevs_list": [ 00:14:39.722 { 00:14:39.722 "name": "BaseBdev1", 00:14:39.722 "uuid": "07d0fd60-bb8c-4329-a186-b227257f817d", 00:14:39.722 "is_configured": true, 00:14:39.722 "data_offset": 2048, 00:14:39.722 "data_size": 63488 00:14:39.722 }, 00:14:39.722 { 00:14:39.722 "name": "BaseBdev2", 00:14:39.722 "uuid": "f20c3164-42fa-4392-9829-e9771b4c8a7a", 00:14:39.722 "is_configured": true, 00:14:39.722 "data_offset": 2048, 00:14:39.722 "data_size": 63488 00:14:39.722 }, 00:14:39.722 { 00:14:39.722 "name": "BaseBdev3", 00:14:39.722 "uuid": "e3882789-34fa-4b3a-9f39-a7bbcd7e00ba", 00:14:39.722 "is_configured": true, 00:14:39.722 "data_offset": 2048, 00:14:39.722 "data_size": 63488 00:14:39.722 } 00:14:39.722 ] 00:14:39.722 } 00:14:39.722 } 00:14:39.722 }' 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:39.722 BaseBdev2 00:14:39.722 BaseBdev3' 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.722 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.723 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.723 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.723 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.723 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:39.723 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:39.723 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.723 09:52:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.723 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:39.723 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.723 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.723 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.723 09:52:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:39.723 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.723 09:52:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.723 [2024-12-06 09:52:04.975178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.006 
09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.006 "name": "Existed_Raid", 00:14:40.006 "uuid": "9152570d-06e2-45c5-92e0-0519b379b3d5", 00:14:40.006 "strip_size_kb": 64, 00:14:40.006 "state": "online", 00:14:40.006 "raid_level": "raid5f", 00:14:40.006 "superblock": true, 00:14:40.006 "num_base_bdevs": 3, 00:14:40.006 "num_base_bdevs_discovered": 2, 00:14:40.006 "num_base_bdevs_operational": 2, 00:14:40.006 
"base_bdevs_list": [ 00:14:40.006 { 00:14:40.006 "name": null, 00:14:40.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.006 "is_configured": false, 00:14:40.006 "data_offset": 0, 00:14:40.006 "data_size": 63488 00:14:40.006 }, 00:14:40.006 { 00:14:40.006 "name": "BaseBdev2", 00:14:40.006 "uuid": "f20c3164-42fa-4392-9829-e9771b4c8a7a", 00:14:40.006 "is_configured": true, 00:14:40.006 "data_offset": 2048, 00:14:40.006 "data_size": 63488 00:14:40.006 }, 00:14:40.006 { 00:14:40.006 "name": "BaseBdev3", 00:14:40.006 "uuid": "e3882789-34fa-4b3a-9f39-a7bbcd7e00ba", 00:14:40.006 "is_configured": true, 00:14:40.006 "data_offset": 2048, 00:14:40.006 "data_size": 63488 00:14:40.006 } 00:14:40.006 ] 00:14:40.006 }' 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.006 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.265 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:40.265 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.265 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.265 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.265 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:40.265 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.265 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:40.523 09:52:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.523 [2024-12-06 09:52:05.564353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:40.523 [2024-12-06 09:52:05.564501] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.523 [2024-12-06 09:52:05.659393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:40.523 09:52:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.523 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.523 [2024-12-06 09:52:05.715322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:40.523 [2024-12-06 09:52:05.715405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:40.782 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.782 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:40.782 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.783 BaseBdev2 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.783 [ 00:14:40.783 { 00:14:40.783 "name": "BaseBdev2", 
00:14:40.783 "aliases": [ 00:14:40.783 "cbc55a56-0602-47ad-afad-960b239e10d1" 00:14:40.783 ], 00:14:40.783 "product_name": "Malloc disk", 00:14:40.783 "block_size": 512, 00:14:40.783 "num_blocks": 65536, 00:14:40.783 "uuid": "cbc55a56-0602-47ad-afad-960b239e10d1", 00:14:40.783 "assigned_rate_limits": { 00:14:40.783 "rw_ios_per_sec": 0, 00:14:40.783 "rw_mbytes_per_sec": 0, 00:14:40.783 "r_mbytes_per_sec": 0, 00:14:40.783 "w_mbytes_per_sec": 0 00:14:40.783 }, 00:14:40.783 "claimed": false, 00:14:40.783 "zoned": false, 00:14:40.783 "supported_io_types": { 00:14:40.783 "read": true, 00:14:40.783 "write": true, 00:14:40.783 "unmap": true, 00:14:40.783 "flush": true, 00:14:40.783 "reset": true, 00:14:40.783 "nvme_admin": false, 00:14:40.783 "nvme_io": false, 00:14:40.783 "nvme_io_md": false, 00:14:40.783 "write_zeroes": true, 00:14:40.783 "zcopy": true, 00:14:40.783 "get_zone_info": false, 00:14:40.783 "zone_management": false, 00:14:40.783 "zone_append": false, 00:14:40.783 "compare": false, 00:14:40.783 "compare_and_write": false, 00:14:40.783 "abort": true, 00:14:40.783 "seek_hole": false, 00:14:40.783 "seek_data": false, 00:14:40.783 "copy": true, 00:14:40.783 "nvme_iov_md": false 00:14:40.783 }, 00:14:40.783 "memory_domains": [ 00:14:40.783 { 00:14:40.783 "dma_device_id": "system", 00:14:40.783 "dma_device_type": 1 00:14:40.783 }, 00:14:40.783 { 00:14:40.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.783 "dma_device_type": 2 00:14:40.783 } 00:14:40.783 ], 00:14:40.783 "driver_specific": {} 00:14:40.783 } 00:14:40.783 ] 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.783 BaseBdev3 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.783 09:52:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:40.783 [ 00:14:40.783 { 00:14:40.783 "name": "BaseBdev3", 00:14:40.783 "aliases": [ 00:14:40.783 "155a69f5-30b4-4464-9046-0c057ce9e8a6" 00:14:40.783 ], 00:14:40.783 "product_name": "Malloc disk", 00:14:40.783 "block_size": 512, 00:14:40.783 "num_blocks": 65536, 00:14:40.783 "uuid": "155a69f5-30b4-4464-9046-0c057ce9e8a6", 00:14:40.783 "assigned_rate_limits": { 00:14:40.783 "rw_ios_per_sec": 0, 00:14:40.783 "rw_mbytes_per_sec": 0, 00:14:40.783 "r_mbytes_per_sec": 0, 00:14:40.783 "w_mbytes_per_sec": 0 00:14:40.783 }, 00:14:40.783 "claimed": false, 00:14:40.783 "zoned": false, 00:14:40.783 "supported_io_types": { 00:14:40.783 "read": true, 00:14:40.783 "write": true, 00:14:40.783 "unmap": true, 00:14:40.783 "flush": true, 00:14:40.783 "reset": true, 00:14:40.783 "nvme_admin": false, 00:14:40.783 "nvme_io": false, 00:14:40.783 "nvme_io_md": false, 00:14:40.783 "write_zeroes": true, 00:14:40.783 "zcopy": true, 00:14:40.783 "get_zone_info": false, 00:14:40.783 "zone_management": false, 00:14:40.783 "zone_append": false, 00:14:40.783 "compare": false, 00:14:40.783 "compare_and_write": false, 00:14:40.783 "abort": true, 00:14:40.783 "seek_hole": false, 00:14:40.783 "seek_data": false, 00:14:40.783 "copy": true, 00:14:40.783 "nvme_iov_md": false 00:14:40.783 }, 00:14:40.783 "memory_domains": [ 00:14:40.783 { 00:14:40.783 "dma_device_id": "system", 00:14:40.783 "dma_device_type": 1 00:14:40.783 }, 00:14:40.783 { 00:14:40.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.783 "dma_device_type": 2 00:14:40.783 } 00:14:40.783 ], 00:14:40.783 "driver_specific": {} 00:14:40.783 } 00:14:40.783 ] 00:14:40.783 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.783 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:40.783 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:40.783 
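The `waitforbdev BaseBdev2` / `waitforbdev BaseBdev3` sequences above wait for a freshly created malloc bdev to become visible, with `bdev_timeout` defaulting to 2000 ms. A rough client-side sketch of that pattern (the real helper shells out to `rpc_cmd bdev_get_bdevs -b NAME -t TIMEOUT`, which can wait server-side; `get_bdevs` here is a stand-in callable, not an SPDK API):

```python
import time

def waitforbdev(get_bdevs, name, timeout_s=2.0, poll_interval_s=0.1):
    """Poll until a bdev named `name` appears, or the timeout expires.

    Sketch of the waitforbdev idea from autotest_common.sh; `get_bdevs`
    is any callable returning the current list of bdev dicts.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if any(b.get("name") == name for b in get_bdevs()):
            return True
        time.sleep(poll_interval_s)
    return False

# Simulated RPC response: the bdev list once BaseBdev3 has been created.
bdevs = [{"name": "BaseBdev2"}, {"name": "BaseBdev3"}]
print(waitforbdev(lambda: bdevs, "BaseBdev3"))  # True
```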
09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:40.783 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:40.783 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.783 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.783 [2024-12-06 09:52:06.025927] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:40.783 [2024-12-06 09:52:06.026011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:40.783 [2024-12-06 09:52:06.026052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.783 [2024-12-06 09:52:06.027835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.783 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.783 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:40.783 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.783 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.783 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.783 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.784 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.784 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:40.784 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.784 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.784 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.784 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.784 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.784 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.784 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.042 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.042 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.042 "name": "Existed_Raid", 00:14:41.042 "uuid": "9eac14f1-9a1a-4917-a8c3-01d10ac8c95b", 00:14:41.042 "strip_size_kb": 64, 00:14:41.042 "state": "configuring", 00:14:41.042 "raid_level": "raid5f", 00:14:41.042 "superblock": true, 00:14:41.042 "num_base_bdevs": 3, 00:14:41.042 "num_base_bdevs_discovered": 2, 00:14:41.042 "num_base_bdevs_operational": 3, 00:14:41.042 "base_bdevs_list": [ 00:14:41.042 { 00:14:41.042 "name": "BaseBdev1", 00:14:41.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.042 "is_configured": false, 00:14:41.042 "data_offset": 0, 00:14:41.042 "data_size": 0 00:14:41.042 }, 00:14:41.042 { 00:14:41.042 "name": "BaseBdev2", 00:14:41.042 "uuid": "cbc55a56-0602-47ad-afad-960b239e10d1", 00:14:41.042 "is_configured": true, 00:14:41.042 "data_offset": 2048, 00:14:41.042 "data_size": 63488 00:14:41.042 }, 00:14:41.042 { 00:14:41.042 "name": "BaseBdev3", 00:14:41.042 "uuid": 
"155a69f5-30b4-4464-9046-0c057ce9e8a6", 00:14:41.042 "is_configured": true, 00:14:41.042 "data_offset": 2048, 00:14:41.042 "data_size": 63488 00:14:41.042 } 00:14:41.042 ] 00:14:41.042 }' 00:14:41.042 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.042 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.301 [2024-12-06 09:52:06.481238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.301 09:52:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.301 "name": "Existed_Raid", 00:14:41.301 "uuid": "9eac14f1-9a1a-4917-a8c3-01d10ac8c95b", 00:14:41.301 "strip_size_kb": 64, 00:14:41.301 "state": "configuring", 00:14:41.301 "raid_level": "raid5f", 00:14:41.301 "superblock": true, 00:14:41.301 "num_base_bdevs": 3, 00:14:41.301 "num_base_bdevs_discovered": 1, 00:14:41.301 "num_base_bdevs_operational": 3, 00:14:41.301 "base_bdevs_list": [ 00:14:41.301 { 00:14:41.301 "name": "BaseBdev1", 00:14:41.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.301 "is_configured": false, 00:14:41.301 "data_offset": 0, 00:14:41.301 "data_size": 0 00:14:41.301 }, 00:14:41.301 { 00:14:41.301 "name": null, 00:14:41.301 "uuid": "cbc55a56-0602-47ad-afad-960b239e10d1", 00:14:41.301 "is_configured": false, 00:14:41.301 "data_offset": 0, 00:14:41.301 "data_size": 63488 00:14:41.301 }, 00:14:41.301 { 00:14:41.301 "name": "BaseBdev3", 00:14:41.301 "uuid": "155a69f5-30b4-4464-9046-0c057ce9e8a6", 00:14:41.301 "is_configured": true, 00:14:41.301 "data_offset": 2048, 00:14:41.301 "data_size": 63488 00:14:41.301 } 00:14:41.301 ] 
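After `bdev_raid_remove_base_bdev BaseBdev2`, the dump above reports `num_base_bdevs_discovered: 1`: only `BaseBdev3` remains configured, while the BaseBdev1 slot (not yet re-created) and the removed BaseBdev2 slot both show `is_configured: false`. That relation between the list and the counter can be checked with a tiny sketch (a reading of the dump, not SPDK code):

```python
# base_bdevs_list as dumped after `bdev_raid_remove_base_bdev BaseBdev2`
# (configured flags copied from the log; removed slots keep a null name
# and an all-zero uuid in the full dump).
base_bdevs_list = [
    {"name": "BaseBdev1", "is_configured": False},  # referenced but not created yet
    {"name": None, "is_configured": False},         # BaseBdev2, just removed
    {"name": "BaseBdev3", "is_configured": True},
]

# num_base_bdevs_discovered is the count of slots with a configured base bdev.
num_base_bdevs_discovered = sum(b["is_configured"] for b in base_bdevs_list)
print(num_base_bdevs_discovered)  # 1
```

With 1 discovered out of 3 operational, the array stays in the `configuring` state seen in the dump until the missing base bdevs are supplied.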
00:14:41.301 }' 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.301 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.872 [2024-12-06 09:52:06.953421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:41.872 BaseBdev1 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.872 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.872 [ 00:14:41.872 { 00:14:41.872 "name": "BaseBdev1", 00:14:41.872 "aliases": [ 00:14:41.872 "319d63a6-0932-476d-82fc-7bcbff91fbf8" 00:14:41.872 ], 00:14:41.872 "product_name": "Malloc disk", 00:14:41.872 "block_size": 512, 00:14:41.872 "num_blocks": 65536, 00:14:41.872 "uuid": "319d63a6-0932-476d-82fc-7bcbff91fbf8", 00:14:41.872 "assigned_rate_limits": { 00:14:41.872 "rw_ios_per_sec": 0, 00:14:41.872 "rw_mbytes_per_sec": 0, 00:14:41.872 "r_mbytes_per_sec": 0, 00:14:41.872 "w_mbytes_per_sec": 0 00:14:41.872 }, 00:14:41.872 "claimed": true, 00:14:41.872 "claim_type": "exclusive_write", 00:14:41.872 "zoned": false, 00:14:41.872 "supported_io_types": { 00:14:41.872 "read": true, 00:14:41.872 "write": true, 00:14:41.872 "unmap": true, 00:14:41.872 "flush": true, 00:14:41.872 "reset": true, 00:14:41.872 "nvme_admin": false, 00:14:41.872 "nvme_io": false, 00:14:41.872 
"nvme_io_md": false, 00:14:41.872 "write_zeroes": true, 00:14:41.872 "zcopy": true, 00:14:41.872 "get_zone_info": false, 00:14:41.872 "zone_management": false, 00:14:41.872 "zone_append": false, 00:14:41.872 "compare": false, 00:14:41.872 "compare_and_write": false, 00:14:41.872 "abort": true, 00:14:41.873 "seek_hole": false, 00:14:41.873 "seek_data": false, 00:14:41.873 "copy": true, 00:14:41.873 "nvme_iov_md": false 00:14:41.873 }, 00:14:41.873 "memory_domains": [ 00:14:41.873 { 00:14:41.873 "dma_device_id": "system", 00:14:41.873 "dma_device_type": 1 00:14:41.873 }, 00:14:41.873 { 00:14:41.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.873 "dma_device_type": 2 00:14:41.873 } 00:14:41.873 ], 00:14:41.873 "driver_specific": {} 00:14:41.873 } 00:14:41.873 ] 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.873 
09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.873 09:52:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.873 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.873 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.873 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.873 "name": "Existed_Raid", 00:14:41.873 "uuid": "9eac14f1-9a1a-4917-a8c3-01d10ac8c95b", 00:14:41.873 "strip_size_kb": 64, 00:14:41.873 "state": "configuring", 00:14:41.873 "raid_level": "raid5f", 00:14:41.873 "superblock": true, 00:14:41.873 "num_base_bdevs": 3, 00:14:41.873 "num_base_bdevs_discovered": 2, 00:14:41.873 "num_base_bdevs_operational": 3, 00:14:41.873 "base_bdevs_list": [ 00:14:41.873 { 00:14:41.873 "name": "BaseBdev1", 00:14:41.873 "uuid": "319d63a6-0932-476d-82fc-7bcbff91fbf8", 00:14:41.873 "is_configured": true, 00:14:41.873 "data_offset": 2048, 00:14:41.873 "data_size": 63488 00:14:41.873 }, 00:14:41.873 { 00:14:41.873 "name": null, 00:14:41.873 "uuid": "cbc55a56-0602-47ad-afad-960b239e10d1", 00:14:41.873 "is_configured": false, 00:14:41.873 "data_offset": 0, 00:14:41.873 "data_size": 63488 00:14:41.873 }, 00:14:41.873 { 00:14:41.873 "name": "BaseBdev3", 00:14:41.873 "uuid": "155a69f5-30b4-4464-9046-0c057ce9e8a6", 00:14:41.873 "is_configured": true, 00:14:41.873 "data_offset": 2048, 00:14:41.873 "data_size": 63488 00:14:41.873 } 
00:14:41.873 ] 00:14:41.873 }' 00:14:41.873 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.873 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.441 [2024-12-06 09:52:07.456632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.441 "name": "Existed_Raid", 00:14:42.441 "uuid": "9eac14f1-9a1a-4917-a8c3-01d10ac8c95b", 00:14:42.441 "strip_size_kb": 64, 00:14:42.441 "state": "configuring", 00:14:42.441 "raid_level": "raid5f", 00:14:42.441 "superblock": true, 00:14:42.441 "num_base_bdevs": 3, 00:14:42.441 "num_base_bdevs_discovered": 1, 00:14:42.441 "num_base_bdevs_operational": 3, 00:14:42.441 "base_bdevs_list": [ 00:14:42.441 { 00:14:42.441 "name": "BaseBdev1", 00:14:42.441 "uuid": "319d63a6-0932-476d-82fc-7bcbff91fbf8", 00:14:42.441 "is_configured": true, 
00:14:42.441 "data_offset": 2048, 00:14:42.441 "data_size": 63488 00:14:42.441 }, 00:14:42.441 { 00:14:42.441 "name": null, 00:14:42.441 "uuid": "cbc55a56-0602-47ad-afad-960b239e10d1", 00:14:42.441 "is_configured": false, 00:14:42.441 "data_offset": 0, 00:14:42.441 "data_size": 63488 00:14:42.441 }, 00:14:42.441 { 00:14:42.441 "name": null, 00:14:42.441 "uuid": "155a69f5-30b4-4464-9046-0c057ce9e8a6", 00:14:42.441 "is_configured": false, 00:14:42.441 "data_offset": 0, 00:14:42.441 "data_size": 63488 00:14:42.441 } 00:14:42.441 ] 00:14:42.441 }' 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.441 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.701 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:42.701 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.701 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.701 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.701 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.701 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:42.701 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:42.701 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.701 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.961 [2024-12-06 09:52:07.971930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:42.961 09:52:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.961 09:52:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.961 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:42.961 "name": "Existed_Raid", 00:14:42.961 "uuid": "9eac14f1-9a1a-4917-a8c3-01d10ac8c95b", 00:14:42.961 "strip_size_kb": 64, 00:14:42.961 "state": "configuring", 00:14:42.961 "raid_level": "raid5f", 00:14:42.961 "superblock": true, 00:14:42.961 "num_base_bdevs": 3, 00:14:42.961 "num_base_bdevs_discovered": 2, 00:14:42.961 "num_base_bdevs_operational": 3, 00:14:42.961 "base_bdevs_list": [ 00:14:42.961 { 00:14:42.961 "name": "BaseBdev1", 00:14:42.961 "uuid": "319d63a6-0932-476d-82fc-7bcbff91fbf8", 00:14:42.961 "is_configured": true, 00:14:42.961 "data_offset": 2048, 00:14:42.961 "data_size": 63488 00:14:42.961 }, 00:14:42.961 { 00:14:42.961 "name": null, 00:14:42.961 "uuid": "cbc55a56-0602-47ad-afad-960b239e10d1", 00:14:42.961 "is_configured": false, 00:14:42.961 "data_offset": 0, 00:14:42.961 "data_size": 63488 00:14:42.961 }, 00:14:42.961 { 00:14:42.961 "name": "BaseBdev3", 00:14:42.961 "uuid": "155a69f5-30b4-4464-9046-0c057ce9e8a6", 00:14:42.961 "is_configured": true, 00:14:42.961 "data_offset": 2048, 00:14:42.961 "data_size": 63488 00:14:42.961 } 00:14:42.961 ] 00:14:42.961 }' 00:14:42.961 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.961 09:52:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.221 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.221 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:43.221 09:52:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.221 09:52:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.221 09:52:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.221 09:52:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:43.221 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:43.221 09:52:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.221 09:52:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.221 [2024-12-06 09:52:08.479096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.482 09:52:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.482 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.482 "name": "Existed_Raid", 00:14:43.482 "uuid": "9eac14f1-9a1a-4917-a8c3-01d10ac8c95b", 00:14:43.482 "strip_size_kb": 64, 00:14:43.482 "state": "configuring", 00:14:43.482 "raid_level": "raid5f", 00:14:43.482 "superblock": true, 00:14:43.483 "num_base_bdevs": 3, 00:14:43.483 "num_base_bdevs_discovered": 1, 00:14:43.483 "num_base_bdevs_operational": 3, 00:14:43.483 "base_bdevs_list": [ 00:14:43.483 { 00:14:43.483 "name": null, 00:14:43.483 "uuid": "319d63a6-0932-476d-82fc-7bcbff91fbf8", 00:14:43.483 "is_configured": false, 00:14:43.483 "data_offset": 0, 00:14:43.483 "data_size": 63488 00:14:43.483 }, 00:14:43.483 { 00:14:43.483 "name": null, 00:14:43.483 "uuid": "cbc55a56-0602-47ad-afad-960b239e10d1", 00:14:43.483 "is_configured": false, 00:14:43.483 "data_offset": 0, 00:14:43.483 "data_size": 63488 00:14:43.483 }, 00:14:43.483 { 00:14:43.483 "name": "BaseBdev3", 00:14:43.483 "uuid": "155a69f5-30b4-4464-9046-0c057ce9e8a6", 00:14:43.483 "is_configured": true, 00:14:43.483 "data_offset": 2048, 00:14:43.483 "data_size": 63488 00:14:43.483 } 00:14:43.483 ] 00:14:43.483 }' 00:14:43.483 09:52:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.483 09:52:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.053 09:52:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.053 [2024-12-06 09:52:09.071641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.053 "name": "Existed_Raid", 00:14:44.053 "uuid": "9eac14f1-9a1a-4917-a8c3-01d10ac8c95b", 00:14:44.053 "strip_size_kb": 64, 00:14:44.053 "state": "configuring", 00:14:44.053 "raid_level": "raid5f", 00:14:44.053 "superblock": true, 00:14:44.053 "num_base_bdevs": 3, 00:14:44.053 "num_base_bdevs_discovered": 2, 00:14:44.053 "num_base_bdevs_operational": 3, 00:14:44.053 "base_bdevs_list": [ 00:14:44.053 { 00:14:44.053 "name": null, 00:14:44.053 "uuid": "319d63a6-0932-476d-82fc-7bcbff91fbf8", 00:14:44.053 "is_configured": false, 00:14:44.053 "data_offset": 0, 00:14:44.053 "data_size": 63488 00:14:44.053 }, 00:14:44.053 { 00:14:44.053 "name": "BaseBdev2", 00:14:44.053 "uuid": "cbc55a56-0602-47ad-afad-960b239e10d1", 00:14:44.053 "is_configured": true, 00:14:44.053 "data_offset": 2048, 00:14:44.053 "data_size": 63488 00:14:44.053 }, 00:14:44.053 { 00:14:44.053 "name": "BaseBdev3", 00:14:44.053 
"uuid": "155a69f5-30b4-4464-9046-0c057ce9e8a6", 00:14:44.053 "is_configured": true, 00:14:44.053 "data_offset": 2048, 00:14:44.053 "data_size": 63488 00:14:44.053 } 00:14:44.053 ] 00:14:44.053 }' 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.053 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 319d63a6-0932-476d-82fc-7bcbff91fbf8 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.313 [2024-12-06 09:52:09.577749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:44.313 [2024-12-06 09:52:09.578081] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:44.313 [2024-12-06 09:52:09.578136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:44.313 [2024-12-06 09:52:09.578450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:44.313 NewBaseBdev 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.313 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.313 [2024-12-06 09:52:09.583854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:44.313 [2024-12-06 09:52:09.583911] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:44.313 [2024-12-06 09:52:09.584087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.574 [ 00:14:44.574 { 00:14:44.574 "name": "NewBaseBdev", 00:14:44.574 "aliases": [ 00:14:44.574 "319d63a6-0932-476d-82fc-7bcbff91fbf8" 00:14:44.574 ], 00:14:44.574 "product_name": "Malloc disk", 00:14:44.574 "block_size": 512, 00:14:44.574 "num_blocks": 65536, 00:14:44.574 "uuid": "319d63a6-0932-476d-82fc-7bcbff91fbf8", 00:14:44.574 "assigned_rate_limits": { 00:14:44.574 "rw_ios_per_sec": 0, 00:14:44.574 "rw_mbytes_per_sec": 0, 00:14:44.574 "r_mbytes_per_sec": 0, 00:14:44.574 "w_mbytes_per_sec": 0 00:14:44.574 }, 00:14:44.574 "claimed": true, 00:14:44.574 "claim_type": "exclusive_write", 00:14:44.574 "zoned": false, 00:14:44.574 "supported_io_types": { 00:14:44.574 "read": true, 00:14:44.574 "write": true, 00:14:44.574 "unmap": true, 00:14:44.574 "flush": true, 00:14:44.574 "reset": true, 00:14:44.574 "nvme_admin": false, 00:14:44.574 "nvme_io": false, 00:14:44.574 "nvme_io_md": false, 00:14:44.574 "write_zeroes": true, 00:14:44.574 "zcopy": true, 00:14:44.574 "get_zone_info": false, 00:14:44.574 "zone_management": false, 00:14:44.574 "zone_append": false, 00:14:44.574 "compare": false, 00:14:44.574 "compare_and_write": false, 00:14:44.574 "abort": true, 00:14:44.574 "seek_hole": false, 00:14:44.574 "seek_data": false, 00:14:44.574 "copy": true, 
00:14:44.574 "nvme_iov_md": false 00:14:44.574 }, 00:14:44.574 "memory_domains": [ 00:14:44.574 { 00:14:44.574 "dma_device_id": "system", 00:14:44.574 "dma_device_type": 1 00:14:44.574 }, 00:14:44.574 { 00:14:44.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.574 "dma_device_type": 2 00:14:44.574 } 00:14:44.574 ], 00:14:44.574 "driver_specific": {} 00:14:44.574 } 00:14:44.574 ] 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.574 "name": "Existed_Raid", 00:14:44.574 "uuid": "9eac14f1-9a1a-4917-a8c3-01d10ac8c95b", 00:14:44.574 "strip_size_kb": 64, 00:14:44.574 "state": "online", 00:14:44.574 "raid_level": "raid5f", 00:14:44.574 "superblock": true, 00:14:44.574 "num_base_bdevs": 3, 00:14:44.574 "num_base_bdevs_discovered": 3, 00:14:44.574 "num_base_bdevs_operational": 3, 00:14:44.574 "base_bdevs_list": [ 00:14:44.574 { 00:14:44.574 "name": "NewBaseBdev", 00:14:44.574 "uuid": "319d63a6-0932-476d-82fc-7bcbff91fbf8", 00:14:44.574 "is_configured": true, 00:14:44.574 "data_offset": 2048, 00:14:44.574 "data_size": 63488 00:14:44.574 }, 00:14:44.574 { 00:14:44.574 "name": "BaseBdev2", 00:14:44.574 "uuid": "cbc55a56-0602-47ad-afad-960b239e10d1", 00:14:44.574 "is_configured": true, 00:14:44.574 "data_offset": 2048, 00:14:44.574 "data_size": 63488 00:14:44.574 }, 00:14:44.574 { 00:14:44.574 "name": "BaseBdev3", 00:14:44.574 "uuid": "155a69f5-30b4-4464-9046-0c057ce9e8a6", 00:14:44.574 "is_configured": true, 00:14:44.574 "data_offset": 2048, 00:14:44.574 "data_size": 63488 00:14:44.574 } 00:14:44.574 ] 00:14:44.574 }' 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.574 09:52:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.834 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:44.834 09:52:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:44.834 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:44.834 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:44.834 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:44.834 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:44.834 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:44.834 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:44.834 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.834 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.834 [2024-12-06 09:52:10.097481] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.094 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.094 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:45.094 "name": "Existed_Raid", 00:14:45.094 "aliases": [ 00:14:45.094 "9eac14f1-9a1a-4917-a8c3-01d10ac8c95b" 00:14:45.094 ], 00:14:45.094 "product_name": "Raid Volume", 00:14:45.094 "block_size": 512, 00:14:45.094 "num_blocks": 126976, 00:14:45.094 "uuid": "9eac14f1-9a1a-4917-a8c3-01d10ac8c95b", 00:14:45.094 "assigned_rate_limits": { 00:14:45.094 "rw_ios_per_sec": 0, 00:14:45.094 "rw_mbytes_per_sec": 0, 00:14:45.094 "r_mbytes_per_sec": 0, 00:14:45.094 "w_mbytes_per_sec": 0 00:14:45.094 }, 00:14:45.094 "claimed": false, 00:14:45.094 "zoned": false, 00:14:45.094 "supported_io_types": { 00:14:45.094 "read": true, 00:14:45.094 
"write": true, 00:14:45.094 "unmap": false, 00:14:45.094 "flush": false, 00:14:45.094 "reset": true, 00:14:45.094 "nvme_admin": false, 00:14:45.094 "nvme_io": false, 00:14:45.094 "nvme_io_md": false, 00:14:45.094 "write_zeroes": true, 00:14:45.094 "zcopy": false, 00:14:45.094 "get_zone_info": false, 00:14:45.094 "zone_management": false, 00:14:45.094 "zone_append": false, 00:14:45.094 "compare": false, 00:14:45.094 "compare_and_write": false, 00:14:45.094 "abort": false, 00:14:45.094 "seek_hole": false, 00:14:45.094 "seek_data": false, 00:14:45.095 "copy": false, 00:14:45.095 "nvme_iov_md": false 00:14:45.095 }, 00:14:45.095 "driver_specific": { 00:14:45.095 "raid": { 00:14:45.095 "uuid": "9eac14f1-9a1a-4917-a8c3-01d10ac8c95b", 00:14:45.095 "strip_size_kb": 64, 00:14:45.095 "state": "online", 00:14:45.095 "raid_level": "raid5f", 00:14:45.095 "superblock": true, 00:14:45.095 "num_base_bdevs": 3, 00:14:45.095 "num_base_bdevs_discovered": 3, 00:14:45.095 "num_base_bdevs_operational": 3, 00:14:45.095 "base_bdevs_list": [ 00:14:45.095 { 00:14:45.095 "name": "NewBaseBdev", 00:14:45.095 "uuid": "319d63a6-0932-476d-82fc-7bcbff91fbf8", 00:14:45.095 "is_configured": true, 00:14:45.095 "data_offset": 2048, 00:14:45.095 "data_size": 63488 00:14:45.095 }, 00:14:45.095 { 00:14:45.095 "name": "BaseBdev2", 00:14:45.095 "uuid": "cbc55a56-0602-47ad-afad-960b239e10d1", 00:14:45.095 "is_configured": true, 00:14:45.095 "data_offset": 2048, 00:14:45.095 "data_size": 63488 00:14:45.095 }, 00:14:45.095 { 00:14:45.095 "name": "BaseBdev3", 00:14:45.095 "uuid": "155a69f5-30b4-4464-9046-0c057ce9e8a6", 00:14:45.095 "is_configured": true, 00:14:45.095 "data_offset": 2048, 00:14:45.095 "data_size": 63488 00:14:45.095 } 00:14:45.095 ] 00:14:45.095 } 00:14:45.095 } 00:14:45.095 }' 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:45.095 09:52:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:45.095 BaseBdev2 00:14:45.095 BaseBdev3' 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.095 09:52:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.095 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.095 [2024-12-06 09:52:10.360900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.095 [2024-12-06 09:52:10.360987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:14:45.095 [2024-12-06 09:52:10.361092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.095 [2024-12-06 09:52:10.361374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.095 [2024-12-06 09:52:10.361389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:45.355 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.355 09:52:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80387 00:14:45.355 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80387 ']' 00:14:45.355 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80387 00:14:45.355 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:45.355 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.355 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80387 00:14:45.355 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:45.355 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:45.355 killing process with pid 80387 00:14:45.355 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80387' 00:14:45.355 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80387 00:14:45.355 [2024-12-06 09:52:10.398428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.355 09:52:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 
-- # wait 80387 00:14:45.614 [2024-12-06 09:52:10.698790] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.558 09:52:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:46.558 00:14:46.558 real 0m10.681s 00:14:46.558 user 0m16.928s 00:14:46.558 sys 0m1.977s 00:14:46.558 09:52:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.558 ************************************ 00:14:46.558 END TEST raid5f_state_function_test_sb 00:14:46.558 ************************************ 00:14:46.558 09:52:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.817 09:52:11 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:46.817 09:52:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:46.817 09:52:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.817 09:52:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.817 ************************************ 00:14:46.817 START TEST raid5f_superblock_test 00:14:46.817 ************************************ 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:46.817 09:52:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81002 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:46.817 09:52:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81002 00:14:46.818 09:52:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81002 ']' 00:14:46.818 09:52:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.818 09:52:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.818 09:52:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:46.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.818 09:52:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.818 09:52:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.818 [2024-12-06 09:52:11.995863] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:14:46.818 [2024-12-06 09:52:11.996122] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81002 ] 00:14:47.077 [2024-12-06 09:52:12.168989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.077 [2024-12-06 09:52:12.283569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.336 [2024-12-06 09:52:12.486660] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.336 [2024-12-06 09:52:12.486720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:47.596 09:52:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.596 malloc1 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.596 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.596 [2024-12-06 09:52:12.866903] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:47.596 [2024-12-06 09:52:12.867011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.596 [2024-12-06 09:52:12.867051] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:47.596 [2024-12-06 09:52:12.867079] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.856 [2024-12-06 09:52:12.869137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.856 [2024-12-06 09:52:12.869243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:47.856 pt1 00:14:47.856 09:52:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.856 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:47.856 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.857 malloc2 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.857 [2024-12-06 09:52:12.926255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:47.857 [2024-12-06 09:52:12.926347] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.857 [2024-12-06 09:52:12.926376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:47.857 [2024-12-06 09:52:12.926385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.857 [2024-12-06 09:52:12.928445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.857 [2024-12-06 09:52:12.928484] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:47.857 pt2 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.857 malloc3 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.857 [2024-12-06 09:52:12.996230] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:47.857 [2024-12-06 09:52:12.996316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.857 [2024-12-06 09:52:12.996355] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:47.857 [2024-12-06 09:52:12.996381] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.857 [2024-12-06 09:52:12.998379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.857 [2024-12-06 09:52:12.998448] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:47.857 pt3 00:14:47.857 09:52:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.857 [2024-12-06 09:52:13.008257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:47.857 [2024-12-06 
09:52:13.009983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:47.857 [2024-12-06 09:52:13.010086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:47.857 [2024-12-06 09:52:13.010286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:47.857 [2024-12-06 09:52:13.010340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:47.857 [2024-12-06 09:52:13.010592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:47.857 [2024-12-06 09:52:13.015962] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:47.857 [2024-12-06 09:52:13.016014] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:47.857 [2024-12-06 09:52:13.016227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.857 "name": "raid_bdev1", 00:14:47.857 "uuid": "825ac322-9008-4b1d-9860-0718f73c5a68", 00:14:47.857 "strip_size_kb": 64, 00:14:47.857 "state": "online", 00:14:47.857 "raid_level": "raid5f", 00:14:47.857 "superblock": true, 00:14:47.857 "num_base_bdevs": 3, 00:14:47.857 "num_base_bdevs_discovered": 3, 00:14:47.857 "num_base_bdevs_operational": 3, 00:14:47.857 "base_bdevs_list": [ 00:14:47.857 { 00:14:47.857 "name": "pt1", 00:14:47.857 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:47.857 "is_configured": true, 00:14:47.857 "data_offset": 2048, 00:14:47.857 "data_size": 63488 00:14:47.857 }, 00:14:47.857 { 00:14:47.857 "name": "pt2", 00:14:47.857 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:47.857 "is_configured": true, 00:14:47.857 "data_offset": 2048, 00:14:47.857 "data_size": 63488 00:14:47.857 }, 00:14:47.857 { 00:14:47.857 "name": "pt3", 00:14:47.857 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:47.857 "is_configured": true, 00:14:47.857 "data_offset": 2048, 00:14:47.857 "data_size": 63488 00:14:47.857 } 00:14:47.857 ] 00:14:47.857 }' 00:14:47.857 09:52:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.857 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.428 [2024-12-06 09:52:13.509812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:48.428 "name": "raid_bdev1", 00:14:48.428 "aliases": [ 00:14:48.428 "825ac322-9008-4b1d-9860-0718f73c5a68" 00:14:48.428 ], 00:14:48.428 "product_name": "Raid Volume", 00:14:48.428 "block_size": 512, 00:14:48.428 "num_blocks": 126976, 00:14:48.428 "uuid": "825ac322-9008-4b1d-9860-0718f73c5a68", 00:14:48.428 "assigned_rate_limits": { 00:14:48.428 "rw_ios_per_sec": 0, 00:14:48.428 
"rw_mbytes_per_sec": 0, 00:14:48.428 "r_mbytes_per_sec": 0, 00:14:48.428 "w_mbytes_per_sec": 0 00:14:48.428 }, 00:14:48.428 "claimed": false, 00:14:48.428 "zoned": false, 00:14:48.428 "supported_io_types": { 00:14:48.428 "read": true, 00:14:48.428 "write": true, 00:14:48.428 "unmap": false, 00:14:48.428 "flush": false, 00:14:48.428 "reset": true, 00:14:48.428 "nvme_admin": false, 00:14:48.428 "nvme_io": false, 00:14:48.428 "nvme_io_md": false, 00:14:48.428 "write_zeroes": true, 00:14:48.428 "zcopy": false, 00:14:48.428 "get_zone_info": false, 00:14:48.428 "zone_management": false, 00:14:48.428 "zone_append": false, 00:14:48.428 "compare": false, 00:14:48.428 "compare_and_write": false, 00:14:48.428 "abort": false, 00:14:48.428 "seek_hole": false, 00:14:48.428 "seek_data": false, 00:14:48.428 "copy": false, 00:14:48.428 "nvme_iov_md": false 00:14:48.428 }, 00:14:48.428 "driver_specific": { 00:14:48.428 "raid": { 00:14:48.428 "uuid": "825ac322-9008-4b1d-9860-0718f73c5a68", 00:14:48.428 "strip_size_kb": 64, 00:14:48.428 "state": "online", 00:14:48.428 "raid_level": "raid5f", 00:14:48.428 "superblock": true, 00:14:48.428 "num_base_bdevs": 3, 00:14:48.428 "num_base_bdevs_discovered": 3, 00:14:48.428 "num_base_bdevs_operational": 3, 00:14:48.428 "base_bdevs_list": [ 00:14:48.428 { 00:14:48.428 "name": "pt1", 00:14:48.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:48.428 "is_configured": true, 00:14:48.428 "data_offset": 2048, 00:14:48.428 "data_size": 63488 00:14:48.428 }, 00:14:48.428 { 00:14:48.428 "name": "pt2", 00:14:48.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:48.428 "is_configured": true, 00:14:48.428 "data_offset": 2048, 00:14:48.428 "data_size": 63488 00:14:48.428 }, 00:14:48.428 { 00:14:48.428 "name": "pt3", 00:14:48.428 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:48.428 "is_configured": true, 00:14:48.428 "data_offset": 2048, 00:14:48.428 "data_size": 63488 00:14:48.428 } 00:14:48.428 ] 00:14:48.428 } 00:14:48.428 } 
00:14:48.428 }' 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:48.428 pt2 00:14:48.428 pt3' 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.428 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:48.429 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:48.429 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.688 [2024-12-06 09:52:13.773305] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=825ac322-9008-4b1d-9860-0718f73c5a68 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 825ac322-9008-4b1d-9860-0718f73c5a68 ']' 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.688 [2024-12-06 09:52:13.821041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:48.688 [2024-12-06 09:52:13.821106] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.688 [2024-12-06 09:52:13.821192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.688 [2024-12-06 09:52:13.821264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.688 [2024-12-06 09:52:13.821274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:48.688 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:48.689 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:48.689 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:48.689 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.689 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:48.689 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:48.689 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:48.689 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.689 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.689 [2024-12-06 09:52:13.956838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:48.689 [2024-12-06 09:52:13.958696] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:48.689 [2024-12-06 09:52:13.958806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:48.689 [2024-12-06 09:52:13.958860] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:48.689 [2024-12-06 09:52:13.958904] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:48.689 [2024-12-06 09:52:13.958922] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:48.689 [2024-12-06 09:52:13.958939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:48.689 [2024-12-06 09:52:13.958948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:48.949 request: 00:14:48.949 { 00:14:48.949 "name": "raid_bdev1", 00:14:48.949 "raid_level": "raid5f", 00:14:48.949 "base_bdevs": [ 00:14:48.949 "malloc1", 00:14:48.949 "malloc2", 00:14:48.949 "malloc3" 00:14:48.949 ], 00:14:48.949 "strip_size_kb": 64, 00:14:48.949 "superblock": false, 00:14:48.949 "method": "bdev_raid_create", 00:14:48.949 "req_id": 1 00:14:48.949 } 00:14:48.949 Got JSON-RPC error response 00:14:48.949 response: 00:14:48.949 { 00:14:48.949 "code": -17, 00:14:48.949 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:48.949 } 00:14:48.950 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:48.950 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:48.950 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:48.950 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:48.950 09:52:13 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:48.950 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:48.950 09:52:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.950 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.950 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.950 09:52:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.950 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:48.950 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:48.950 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:48.950 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.950 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.950 [2024-12-06 09:52:14.008700] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:48.950 [2024-12-06 09:52:14.008787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.950 [2024-12-06 09:52:14.008823] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:48.950 [2024-12-06 09:52:14.008883] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.950 [2024-12-06 09:52:14.011007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.950 [2024-12-06 09:52:14.011077] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:48.950 [2024-12-06 09:52:14.011183] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt1 00:14:48.950 [2024-12-06 09:52:14.011267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:48.950 pt1 00:14:48.950 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.950 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:48.950 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.950 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.950 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.951 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.951 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.951 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.951 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.951 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.951 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.951 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.951 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.951 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.951 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.951 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.951 
09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.951 "name": "raid_bdev1", 00:14:48.951 "uuid": "825ac322-9008-4b1d-9860-0718f73c5a68", 00:14:48.951 "strip_size_kb": 64, 00:14:48.951 "state": "configuring", 00:14:48.951 "raid_level": "raid5f", 00:14:48.951 "superblock": true, 00:14:48.951 "num_base_bdevs": 3, 00:14:48.951 "num_base_bdevs_discovered": 1, 00:14:48.951 "num_base_bdevs_operational": 3, 00:14:48.951 "base_bdevs_list": [ 00:14:48.951 { 00:14:48.951 "name": "pt1", 00:14:48.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:48.951 "is_configured": true, 00:14:48.951 "data_offset": 2048, 00:14:48.951 "data_size": 63488 00:14:48.951 }, 00:14:48.951 { 00:14:48.951 "name": null, 00:14:48.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:48.951 "is_configured": false, 00:14:48.951 "data_offset": 2048, 00:14:48.951 "data_size": 63488 00:14:48.951 }, 00:14:48.951 { 00:14:48.952 "name": null, 00:14:48.952 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:48.952 "is_configured": false, 00:14:48.952 "data_offset": 2048, 00:14:48.952 "data_size": 63488 00:14:48.952 } 00:14:48.952 ] 00:14:48.952 }' 00:14:48.952 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.952 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.547 [2024-12-06 09:52:14.487992] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:49.547 
[2024-12-06 09:52:14.488108] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.547 [2024-12-06 09:52:14.488135] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:49.547 [2024-12-06 09:52:14.488152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.547 [2024-12-06 09:52:14.488579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.547 [2024-12-06 09:52:14.488604] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:49.547 [2024-12-06 09:52:14.488692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:49.547 [2024-12-06 09:52:14.488719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:49.547 pt2 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.547 [2024-12-06 09:52:14.495986] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.547 
09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.547 "name": "raid_bdev1", 00:14:49.547 "uuid": "825ac322-9008-4b1d-9860-0718f73c5a68", 00:14:49.547 "strip_size_kb": 64, 00:14:49.547 "state": "configuring", 00:14:49.547 "raid_level": "raid5f", 00:14:49.547 "superblock": true, 00:14:49.547 "num_base_bdevs": 3, 00:14:49.547 "num_base_bdevs_discovered": 1, 00:14:49.547 "num_base_bdevs_operational": 3, 00:14:49.547 "base_bdevs_list": [ 00:14:49.547 { 00:14:49.547 "name": "pt1", 00:14:49.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:49.547 "is_configured": true, 00:14:49.547 "data_offset": 2048, 00:14:49.547 "data_size": 63488 00:14:49.547 }, 00:14:49.547 { 00:14:49.547 "name": null, 00:14:49.547 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:14:49.547 "is_configured": false, 00:14:49.547 "data_offset": 0, 00:14:49.547 "data_size": 63488 00:14:49.547 }, 00:14:49.547 { 00:14:49.547 "name": null, 00:14:49.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:49.547 "is_configured": false, 00:14:49.547 "data_offset": 2048, 00:14:49.547 "data_size": 63488 00:14:49.547 } 00:14:49.547 ] 00:14:49.547 }' 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.547 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.806 [2024-12-06 09:52:14.943276] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:49.806 [2024-12-06 09:52:14.943402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.806 [2024-12-06 09:52:14.943438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:49.806 [2024-12-06 09:52:14.943470] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.806 [2024-12-06 09:52:14.943970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.806 [2024-12-06 09:52:14.944033] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:49.806 [2024-12-06 09:52:14.944190] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev pt2 00:14:49.806 [2024-12-06 09:52:14.944246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:49.806 pt2 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.806 [2024-12-06 09:52:14.955242] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:49.806 [2024-12-06 09:52:14.955320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.806 [2024-12-06 09:52:14.955349] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:49.806 [2024-12-06 09:52:14.955377] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.806 [2024-12-06 09:52:14.955802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.806 [2024-12-06 09:52:14.955889] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:49.806 [2024-12-06 09:52:14.955994] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:49.806 [2024-12-06 09:52:14.956049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:49.806 [2024-12-06 09:52:14.956252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:49.806 [2024-12-06 
09:52:14.956302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:49.806 [2024-12-06 09:52:14.956582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:49.806 [2024-12-06 09:52:14.962172] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:49.806 [2024-12-06 09:52:14.962238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:49.806 [2024-12-06 09:52:14.962480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.806 pt3 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.806 
09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.806 09:52:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.806 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.806 "name": "raid_bdev1", 00:14:49.806 "uuid": "825ac322-9008-4b1d-9860-0718f73c5a68", 00:14:49.806 "strip_size_kb": 64, 00:14:49.806 "state": "online", 00:14:49.806 "raid_level": "raid5f", 00:14:49.806 "superblock": true, 00:14:49.806 "num_base_bdevs": 3, 00:14:49.806 "num_base_bdevs_discovered": 3, 00:14:49.806 "num_base_bdevs_operational": 3, 00:14:49.806 "base_bdevs_list": [ 00:14:49.806 { 00:14:49.806 "name": "pt1", 00:14:49.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:49.806 "is_configured": true, 00:14:49.806 "data_offset": 2048, 00:14:49.806 "data_size": 63488 00:14:49.806 }, 00:14:49.806 { 00:14:49.806 "name": "pt2", 00:14:49.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:49.806 "is_configured": true, 00:14:49.806 "data_offset": 2048, 00:14:49.806 "data_size": 63488 00:14:49.806 }, 00:14:49.806 { 00:14:49.806 "name": "pt3", 00:14:49.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:49.806 "is_configured": true, 00:14:49.806 "data_offset": 2048, 00:14:49.806 "data_size": 63488 00:14:49.806 } 00:14:49.806 ] 00:14:49.806 }' 00:14:49.806 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.806 09:52:15 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.374 [2024-12-06 09:52:15.416501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:50.374 "name": "raid_bdev1", 00:14:50.374 "aliases": [ 00:14:50.374 "825ac322-9008-4b1d-9860-0718f73c5a68" 00:14:50.374 ], 00:14:50.374 "product_name": "Raid Volume", 00:14:50.374 "block_size": 512, 00:14:50.374 "num_blocks": 126976, 00:14:50.374 "uuid": "825ac322-9008-4b1d-9860-0718f73c5a68", 00:14:50.374 "assigned_rate_limits": { 00:14:50.374 "rw_ios_per_sec": 0, 00:14:50.374 "rw_mbytes_per_sec": 0, 00:14:50.374 "r_mbytes_per_sec": 0, 00:14:50.374 "w_mbytes_per_sec": 0 00:14:50.374 }, 00:14:50.374 "claimed": false, 
00:14:50.374 "zoned": false, 00:14:50.374 "supported_io_types": { 00:14:50.374 "read": true, 00:14:50.374 "write": true, 00:14:50.374 "unmap": false, 00:14:50.374 "flush": false, 00:14:50.374 "reset": true, 00:14:50.374 "nvme_admin": false, 00:14:50.374 "nvme_io": false, 00:14:50.374 "nvme_io_md": false, 00:14:50.374 "write_zeroes": true, 00:14:50.374 "zcopy": false, 00:14:50.374 "get_zone_info": false, 00:14:50.374 "zone_management": false, 00:14:50.374 "zone_append": false, 00:14:50.374 "compare": false, 00:14:50.374 "compare_and_write": false, 00:14:50.374 "abort": false, 00:14:50.374 "seek_hole": false, 00:14:50.374 "seek_data": false, 00:14:50.374 "copy": false, 00:14:50.374 "nvme_iov_md": false 00:14:50.374 }, 00:14:50.374 "driver_specific": { 00:14:50.374 "raid": { 00:14:50.374 "uuid": "825ac322-9008-4b1d-9860-0718f73c5a68", 00:14:50.374 "strip_size_kb": 64, 00:14:50.374 "state": "online", 00:14:50.374 "raid_level": "raid5f", 00:14:50.374 "superblock": true, 00:14:50.374 "num_base_bdevs": 3, 00:14:50.374 "num_base_bdevs_discovered": 3, 00:14:50.374 "num_base_bdevs_operational": 3, 00:14:50.374 "base_bdevs_list": [ 00:14:50.374 { 00:14:50.374 "name": "pt1", 00:14:50.374 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:50.374 "is_configured": true, 00:14:50.374 "data_offset": 2048, 00:14:50.374 "data_size": 63488 00:14:50.374 }, 00:14:50.374 { 00:14:50.374 "name": "pt2", 00:14:50.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:50.374 "is_configured": true, 00:14:50.374 "data_offset": 2048, 00:14:50.374 "data_size": 63488 00:14:50.374 }, 00:14:50.374 { 00:14:50.374 "name": "pt3", 00:14:50.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:50.374 "is_configured": true, 00:14:50.374 "data_offset": 2048, 00:14:50.374 "data_size": 63488 00:14:50.374 } 00:14:50.374 ] 00:14:50.374 } 00:14:50.374 } 00:14:50.374 }' 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:50.374 pt2 00:14:50.374 pt3' 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.374 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.374 09:52:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:50.632 [2024-12-06 09:52:15.712115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
825ac322-9008-4b1d-9860-0718f73c5a68 '!=' 825ac322-9008-4b1d-9860-0718f73c5a68 ']' 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.632 [2024-12-06 09:52:15.751994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.632 "name": "raid_bdev1", 00:14:50.632 "uuid": "825ac322-9008-4b1d-9860-0718f73c5a68", 00:14:50.632 "strip_size_kb": 64, 00:14:50.632 "state": "online", 00:14:50.632 "raid_level": "raid5f", 00:14:50.632 "superblock": true, 00:14:50.632 "num_base_bdevs": 3, 00:14:50.632 "num_base_bdevs_discovered": 2, 00:14:50.632 "num_base_bdevs_operational": 2, 00:14:50.632 "base_bdevs_list": [ 00:14:50.632 { 00:14:50.632 "name": null, 00:14:50.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.632 "is_configured": false, 00:14:50.632 "data_offset": 0, 00:14:50.632 "data_size": 63488 00:14:50.632 }, 00:14:50.632 { 00:14:50.632 "name": "pt2", 00:14:50.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:50.632 "is_configured": true, 00:14:50.632 "data_offset": 2048, 00:14:50.632 "data_size": 63488 00:14:50.632 }, 00:14:50.632 { 00:14:50.632 "name": "pt3", 00:14:50.632 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:50.632 "is_configured": true, 00:14:50.632 "data_offset": 2048, 00:14:50.632 "data_size": 63488 00:14:50.632 } 00:14:50.632 ] 00:14:50.632 }' 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.632 09:52:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.890 
09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:50.890 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.890 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.890 [2024-12-06 09:52:16.115923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.890 [2024-12-06 09:52:16.115998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.890 [2024-12-06 09:52:16.116106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.890 [2024-12-06 09:52:16.116191] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.890 [2024-12-06 09:52:16.116247] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:50.890 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.890 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.890 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:50.890 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.890 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.890 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.149 [2024-12-06 09:52:16.191775] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:14:51.149 [2024-12-06 09:52:16.191869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.149 [2024-12-06 09:52:16.191890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:51.149 [2024-12-06 09:52:16.191901] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.149 [2024-12-06 09:52:16.193906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.149 [2024-12-06 09:52:16.193950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:51.149 [2024-12-06 09:52:16.194020] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:51.149 [2024-12-06 09:52:16.194068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:51.149 pt2 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.149 "name": "raid_bdev1", 00:14:51.149 "uuid": "825ac322-9008-4b1d-9860-0718f73c5a68", 00:14:51.149 "strip_size_kb": 64, 00:14:51.149 "state": "configuring", 00:14:51.149 "raid_level": "raid5f", 00:14:51.149 "superblock": true, 00:14:51.149 "num_base_bdevs": 3, 00:14:51.149 "num_base_bdevs_discovered": 1, 00:14:51.149 "num_base_bdevs_operational": 2, 00:14:51.149 "base_bdevs_list": [ 00:14:51.149 { 00:14:51.149 "name": null, 00:14:51.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.149 "is_configured": false, 00:14:51.149 "data_offset": 2048, 00:14:51.149 "data_size": 63488 00:14:51.149 }, 00:14:51.149 { 00:14:51.149 "name": "pt2", 00:14:51.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:51.149 "is_configured": true, 00:14:51.149 "data_offset": 2048, 00:14:51.149 "data_size": 63488 00:14:51.149 }, 00:14:51.149 { 00:14:51.149 "name": null, 00:14:51.149 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:51.149 "is_configured": false, 00:14:51.149 "data_offset": 2048, 00:14:51.149 "data_size": 63488 00:14:51.149 } 00:14:51.149 ] 00:14:51.149 }' 00:14:51.149 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.149 09:52:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.409 [2024-12-06 09:52:16.627018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:51.409 [2024-12-06 09:52:16.627127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.409 [2024-12-06 09:52:16.627188] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:51.409 [2024-12-06 09:52:16.627221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.409 [2024-12-06 09:52:16.627682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.409 [2024-12-06 09:52:16.627719] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:51.409 [2024-12-06 09:52:16.627801] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:51.409 [2024-12-06 09:52:16.627825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:51.409 [2024-12-06 09:52:16.627973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:51.409 [2024-12-06 09:52:16.627992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:51.409 [2024-12-06 
09:52:16.628248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:51.409 pt3 00:14:51.409 [2024-12-06 09:52:16.633561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:51.409 [2024-12-06 09:52:16.633582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:51.409 [2024-12-06 09:52:16.633834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.409 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.669 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.669 "name": "raid_bdev1", 00:14:51.669 "uuid": "825ac322-9008-4b1d-9860-0718f73c5a68", 00:14:51.669 "strip_size_kb": 64, 00:14:51.669 "state": "online", 00:14:51.669 "raid_level": "raid5f", 00:14:51.669 "superblock": true, 00:14:51.669 "num_base_bdevs": 3, 00:14:51.669 "num_base_bdevs_discovered": 2, 00:14:51.669 "num_base_bdevs_operational": 2, 00:14:51.669 "base_bdevs_list": [ 00:14:51.669 { 00:14:51.669 "name": null, 00:14:51.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.669 "is_configured": false, 00:14:51.669 "data_offset": 2048, 00:14:51.669 "data_size": 63488 00:14:51.669 }, 00:14:51.669 { 00:14:51.669 "name": "pt2", 00:14:51.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:51.669 "is_configured": true, 00:14:51.669 "data_offset": 2048, 00:14:51.669 "data_size": 63488 00:14:51.669 }, 00:14:51.669 { 00:14:51.669 "name": "pt3", 00:14:51.669 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:51.669 "is_configured": true, 00:14:51.669 "data_offset": 2048, 00:14:51.669 "data_size": 63488 00:14:51.669 } 00:14:51.669 ] 00:14:51.669 }' 00:14:51.669 09:52:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.669 09:52:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.929 [2024-12-06 09:52:17.083940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:51.929 [2024-12-06 09:52:17.084006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.929 [2024-12-06 09:52:17.084083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.929 [2024-12-06 09:52:17.084161] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.929 [2024-12-06 09:52:17.084216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.929 09:52:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.929 [2024-12-06 09:52:17.155937] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:51.929 [2024-12-06 09:52:17.155989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.929 [2024-12-06 09:52:17.156007] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:51.929 [2024-12-06 09:52:17.156016] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.929 [2024-12-06 09:52:17.158179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.929 [2024-12-06 09:52:17.158217] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:51.929 [2024-12-06 09:52:17.158289] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:51.929 [2024-12-06 09:52:17.158341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:51.929 [2024-12-06 09:52:17.158505] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:51.929 [2024-12-06 09:52:17.158518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:51.929 [2024-12-06 09:52:17.158533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:51.929 
[2024-12-06 09:52:17.158595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:51.929 pt1 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.929 09:52:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.190 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.190 "name": "raid_bdev1", 00:14:52.190 "uuid": "825ac322-9008-4b1d-9860-0718f73c5a68", 00:14:52.190 "strip_size_kb": 64, 00:14:52.190 "state": "configuring", 00:14:52.190 "raid_level": "raid5f", 00:14:52.190 "superblock": true, 00:14:52.190 "num_base_bdevs": 3, 00:14:52.190 "num_base_bdevs_discovered": 1, 00:14:52.190 "num_base_bdevs_operational": 2, 00:14:52.190 "base_bdevs_list": [ 00:14:52.190 { 00:14:52.190 "name": null, 00:14:52.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.190 "is_configured": false, 00:14:52.190 "data_offset": 2048, 00:14:52.190 "data_size": 63488 00:14:52.190 }, 00:14:52.190 { 00:14:52.190 "name": "pt2", 00:14:52.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.190 "is_configured": true, 00:14:52.190 "data_offset": 2048, 00:14:52.190 "data_size": 63488 00:14:52.190 }, 00:14:52.190 { 00:14:52.190 "name": null, 00:14:52.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:52.190 "is_configured": false, 00:14:52.190 "data_offset": 2048, 00:14:52.190 "data_size": 63488 00:14:52.190 } 00:14:52.190 ] 00:14:52.190 }' 00:14:52.190 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.190 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.450 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:52.450 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:52.450 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.450 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.450 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:52.450 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:52.450 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:52.450 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.450 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.450 [2024-12-06 09:52:17.671035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:52.450 [2024-12-06 09:52:17.671171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.450 [2024-12-06 09:52:17.671215] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:52.450 [2024-12-06 09:52:17.671248] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.450 [2024-12-06 09:52:17.671770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.450 [2024-12-06 09:52:17.671833] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:52.450 [2024-12-06 09:52:17.671962] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:52.450 [2024-12-06 09:52:17.672016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:52.450 [2024-12-06 09:52:17.672184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:52.450 [2024-12-06 09:52:17.672223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:52.450 [2024-12-06 09:52:17.672487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:52.450 [2024-12-06 09:52:17.678030] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:52.450 [2024-12-06 
09:52:17.678096] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:52.450 [2024-12-06 09:52:17.678377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.450 pt3 00:14:52.450 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.450 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:52.450 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.451 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.451 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.451 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.451 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.451 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.451 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.451 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.451 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.451 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.451 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.451 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.451 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.451 09:52:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.710 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.710 "name": "raid_bdev1", 00:14:52.710 "uuid": "825ac322-9008-4b1d-9860-0718f73c5a68", 00:14:52.710 "strip_size_kb": 64, 00:14:52.710 "state": "online", 00:14:52.710 "raid_level": "raid5f", 00:14:52.710 "superblock": true, 00:14:52.710 "num_base_bdevs": 3, 00:14:52.710 "num_base_bdevs_discovered": 2, 00:14:52.710 "num_base_bdevs_operational": 2, 00:14:52.710 "base_bdevs_list": [ 00:14:52.710 { 00:14:52.710 "name": null, 00:14:52.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.710 "is_configured": false, 00:14:52.710 "data_offset": 2048, 00:14:52.710 "data_size": 63488 00:14:52.710 }, 00:14:52.710 { 00:14:52.710 "name": "pt2", 00:14:52.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.710 "is_configured": true, 00:14:52.710 "data_offset": 2048, 00:14:52.710 "data_size": 63488 00:14:52.710 }, 00:14:52.710 { 00:14:52.710 "name": "pt3", 00:14:52.710 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:52.710 "is_configured": true, 00:14:52.710 "data_offset": 2048, 00:14:52.710 "data_size": 63488 00:14:52.710 } 00:14:52.710 ] 00:14:52.710 }' 00:14:52.710 09:52:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.711 09:52:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:52.971 [2024-12-06 09:52:18.168340] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 825ac322-9008-4b1d-9860-0718f73c5a68 '!=' 825ac322-9008-4b1d-9860-0718f73c5a68 ']' 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81002 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81002 ']' 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81002 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.971 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81002 00:14:53.232 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.232 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.232 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 81002' 00:14:53.232 killing process with pid 81002 00:14:53.232 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81002 00:14:53.232 [2024-12-06 09:52:18.250433] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.232 [2024-12-06 09:52:18.250526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.232 [2024-12-06 09:52:18.250587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.232 [2024-12-06 09:52:18.250597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:53.232 09:52:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81002 00:14:53.492 [2024-12-06 09:52:18.544209] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.439 09:52:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:54.439 00:14:54.439 real 0m7.753s 00:14:54.439 user 0m12.093s 00:14:54.439 sys 0m1.447s 00:14:54.439 09:52:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.439 ************************************ 00:14:54.439 END TEST raid5f_superblock_test 00:14:54.439 ************************************ 00:14:54.439 09:52:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.699 09:52:19 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:54.699 09:52:19 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:54.699 09:52:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:54.699 09:52:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.699 09:52:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.699 ************************************ 00:14:54.699 START TEST raid5f_rebuild_test 
00:14:54.699 ************************************ 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81446 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81446 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81446 ']' 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:54.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.699 09:52:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.699 [2024-12-06 09:52:19.827361] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:14:54.699 [2024-12-06 09:52:19.827572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81446 ] 00:14:54.699 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:54.699 Zero copy mechanism will not be used. 00:14:54.958 [2024-12-06 09:52:19.982694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.958 [2024-12-06 09:52:20.096538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.218 [2024-12-06 09:52:20.285301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.218 [2024-12-06 09:52:20.285385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.478 BaseBdev1_malloc 00:14:55.478 
09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.478 [2024-12-06 09:52:20.694171] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:55.478 [2024-12-06 09:52:20.694228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.478 [2024-12-06 09:52:20.694248] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:55.478 [2024-12-06 09:52:20.694258] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.478 [2024-12-06 09:52:20.696222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.478 [2024-12-06 09:52:20.696316] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:55.478 BaseBdev1 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.478 BaseBdev2_malloc 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.478 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.478 [2024-12-06 09:52:20.748103] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:55.478 [2024-12-06 09:52:20.748216] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.478 [2024-12-06 09:52:20.748243] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:55.478 [2024-12-06 09:52:20.748255] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.739 [2024-12-06 09:52:20.750228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.739 [2024-12-06 09:52:20.750265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:55.739 BaseBdev2 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.739 BaseBdev3_malloc 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.739 [2024-12-06 09:52:20.809651] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:55.739 [2024-12-06 09:52:20.809697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.739 [2024-12-06 09:52:20.809716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:55.739 [2024-12-06 09:52:20.809726] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.739 [2024-12-06 09:52:20.811619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.739 [2024-12-06 09:52:20.811705] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:55.739 BaseBdev3 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.739 spare_malloc 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.739 spare_delay 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p 
spare 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.739 [2024-12-06 09:52:20.874205] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:55.739 [2024-12-06 09:52:20.874290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.739 [2024-12-06 09:52:20.874308] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:55.739 [2024-12-06 09:52:20.874319] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.739 [2024-12-06 09:52:20.876230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.739 [2024-12-06 09:52:20.876270] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:55.739 spare 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.739 [2024-12-06 09:52:20.886245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.739 [2024-12-06 09:52:20.887839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.739 [2024-12-06 09:52:20.887929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.739 [2024-12-06 09:52:20.888007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:55.739 [2024-12-06 09:52:20.888017] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:55.739 [2024-12-06 09:52:20.888256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:55.739 [2024-12-06 09:52:20.893534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:55.739 [2024-12-06 09:52:20.893604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:55.739 [2024-12-06 09:52:20.893771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.739 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.740 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.740 "name": "raid_bdev1", 00:14:55.740 "uuid": "969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:14:55.740 "strip_size_kb": 64, 00:14:55.740 "state": "online", 00:14:55.740 "raid_level": "raid5f", 00:14:55.740 "superblock": false, 00:14:55.740 "num_base_bdevs": 3, 00:14:55.740 "num_base_bdevs_discovered": 3, 00:14:55.740 "num_base_bdevs_operational": 3, 00:14:55.740 "base_bdevs_list": [ 00:14:55.740 { 00:14:55.740 "name": "BaseBdev1", 00:14:55.740 "uuid": "0ab84d60-4079-5962-85ba-0390251e7395", 00:14:55.740 "is_configured": true, 00:14:55.740 "data_offset": 0, 00:14:55.740 "data_size": 65536 00:14:55.740 }, 00:14:55.740 { 00:14:55.740 "name": "BaseBdev2", 00:14:55.740 "uuid": "453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:14:55.740 "is_configured": true, 00:14:55.740 "data_offset": 0, 00:14:55.740 "data_size": 65536 00:14:55.740 }, 00:14:55.740 { 00:14:55.740 "name": "BaseBdev3", 00:14:55.740 "uuid": "c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:14:55.740 "is_configured": true, 00:14:55.740 "data_offset": 0, 00:14:55.740 "data_size": 65536 00:14:55.740 } 00:14:55.740 ] 00:14:55.740 }' 00:14:55.740 09:52:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.740 09:52:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:56.308 09:52:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.308 [2024-12-06 09:52:21.331136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:56.308 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:56.568 [2024-12-06 09:52:21.606524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:56.568 /dev/nbd0 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.568 1+0 records in 00:14:56.568 1+0 
records out 00:14:56.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335169 s, 12.2 MB/s 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:56.568 09:52:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:56.827 512+0 records in 00:14:56.827 512+0 records out 00:14:56.827 67108864 bytes (67 MB, 64 MiB) copied, 0.364055 s, 184 MB/s 00:14:56.827 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:56.827 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:56.827 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:56.827 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:56.827 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 
00:14:56.827 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.827 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:57.087 [2024-12-06 09:52:22.257592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.087 [2024-12-06 09:52:22.268897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.087 09:52:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.087 "name": "raid_bdev1", 00:14:57.087 "uuid": "969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:14:57.087 "strip_size_kb": 64, 00:14:57.087 "state": "online", 00:14:57.087 "raid_level": "raid5f", 00:14:57.087 "superblock": false, 00:14:57.087 "num_base_bdevs": 3, 00:14:57.087 "num_base_bdevs_discovered": 2, 00:14:57.087 "num_base_bdevs_operational": 2, 00:14:57.087 "base_bdevs_list": [ 00:14:57.087 { 00:14:57.087 "name": null, 00:14:57.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.087 "is_configured": 
false, 00:14:57.087 "data_offset": 0, 00:14:57.087 "data_size": 65536 00:14:57.087 }, 00:14:57.087 { 00:14:57.087 "name": "BaseBdev2", 00:14:57.087 "uuid": "453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:14:57.087 "is_configured": true, 00:14:57.087 "data_offset": 0, 00:14:57.087 "data_size": 65536 00:14:57.087 }, 00:14:57.087 { 00:14:57.087 "name": "BaseBdev3", 00:14:57.087 "uuid": "c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:14:57.087 "is_configured": true, 00:14:57.087 "data_offset": 0, 00:14:57.087 "data_size": 65536 00:14:57.087 } 00:14:57.087 ] 00:14:57.087 }' 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.087 09:52:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.654 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:57.654 09:52:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.654 09:52:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.654 [2024-12-06 09:52:22.736079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:57.654 [2024-12-06 09:52:22.752400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:57.654 09:52:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.654 09:52:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:57.654 [2024-12-06 09:52:22.759899] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:58.592 09:52:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.592 09:52:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.592 09:52:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:58.592 09:52:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.592 09:52:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.592 09:52:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.592 09:52:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.592 09:52:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.592 09:52:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.592 09:52:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.592 09:52:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.592 "name": "raid_bdev1", 00:14:58.592 "uuid": "969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:14:58.592 "strip_size_kb": 64, 00:14:58.592 "state": "online", 00:14:58.592 "raid_level": "raid5f", 00:14:58.592 "superblock": false, 00:14:58.592 "num_base_bdevs": 3, 00:14:58.592 "num_base_bdevs_discovered": 3, 00:14:58.592 "num_base_bdevs_operational": 3, 00:14:58.592 "process": { 00:14:58.592 "type": "rebuild", 00:14:58.592 "target": "spare", 00:14:58.592 "progress": { 00:14:58.592 "blocks": 20480, 00:14:58.592 "percent": 15 00:14:58.592 } 00:14:58.592 }, 00:14:58.592 "base_bdevs_list": [ 00:14:58.592 { 00:14:58.592 "name": "spare", 00:14:58.592 "uuid": "fde12681-4c44-52f2-8816-2daaacb3f8be", 00:14:58.592 "is_configured": true, 00:14:58.592 "data_offset": 0, 00:14:58.592 "data_size": 65536 00:14:58.592 }, 00:14:58.592 { 00:14:58.592 "name": "BaseBdev2", 00:14:58.592 "uuid": "453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:14:58.592 "is_configured": true, 00:14:58.592 "data_offset": 0, 00:14:58.592 "data_size": 65536 00:14:58.592 }, 00:14:58.592 { 00:14:58.592 "name": "BaseBdev3", 00:14:58.592 "uuid": 
"c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:14:58.592 "is_configured": true, 00:14:58.592 "data_offset": 0, 00:14:58.592 "data_size": 65536 00:14:58.592 } 00:14:58.592 ] 00:14:58.592 }' 00:14:58.592 09:52:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.592 09:52:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.851 09:52:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.851 09:52:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.851 09:52:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:58.851 09:52:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.851 09:52:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.851 [2024-12-06 09:52:23.919041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.851 [2024-12-06 09:52:23.967944] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:58.851 [2024-12-06 09:52:23.968001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.851 [2024-12-06 09:52:23.968019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.851 [2024-12-06 09:52:23.968026] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.851 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.851 "name": "raid_bdev1", 00:14:58.851 "uuid": "969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:14:58.851 "strip_size_kb": 64, 00:14:58.851 "state": "online", 00:14:58.851 "raid_level": "raid5f", 00:14:58.851 "superblock": false, 00:14:58.851 "num_base_bdevs": 3, 00:14:58.851 "num_base_bdevs_discovered": 2, 00:14:58.851 "num_base_bdevs_operational": 2, 00:14:58.851 "base_bdevs_list": [ 00:14:58.851 { 00:14:58.851 "name": null, 00:14:58.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.851 "is_configured": false, 00:14:58.851 "data_offset": 0, 
00:14:58.851 "data_size": 65536 00:14:58.851 }, 00:14:58.851 { 00:14:58.851 "name": "BaseBdev2", 00:14:58.851 "uuid": "453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:14:58.851 "is_configured": true, 00:14:58.851 "data_offset": 0, 00:14:58.851 "data_size": 65536 00:14:58.851 }, 00:14:58.851 { 00:14:58.851 "name": "BaseBdev3", 00:14:58.851 "uuid": "c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:14:58.852 "is_configured": true, 00:14:58.852 "data_offset": 0, 00:14:58.852 "data_size": 65536 00:14:58.852 } 00:14:58.852 ] 00:14:58.852 }' 00:14:58.852 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.852 09:52:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.418 "name": "raid_bdev1", 00:14:59.418 "uuid": 
"969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:14:59.418 "strip_size_kb": 64, 00:14:59.418 "state": "online", 00:14:59.418 "raid_level": "raid5f", 00:14:59.418 "superblock": false, 00:14:59.418 "num_base_bdevs": 3, 00:14:59.418 "num_base_bdevs_discovered": 2, 00:14:59.418 "num_base_bdevs_operational": 2, 00:14:59.418 "base_bdevs_list": [ 00:14:59.418 { 00:14:59.418 "name": null, 00:14:59.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.418 "is_configured": false, 00:14:59.418 "data_offset": 0, 00:14:59.418 "data_size": 65536 00:14:59.418 }, 00:14:59.418 { 00:14:59.418 "name": "BaseBdev2", 00:14:59.418 "uuid": "453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:14:59.418 "is_configured": true, 00:14:59.418 "data_offset": 0, 00:14:59.418 "data_size": 65536 00:14:59.418 }, 00:14:59.418 { 00:14:59.418 "name": "BaseBdev3", 00:14:59.418 "uuid": "c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:14:59.418 "is_configured": true, 00:14:59.418 "data_offset": 0, 00:14:59.418 "data_size": 65536 00:14:59.418 } 00:14:59.418 ] 00:14:59.418 }' 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.418 [2024-12-06 09:52:24.585523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:59.418 [2024-12-06 09:52:24.601268] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.418 09:52:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:59.418 [2024-12-06 09:52:24.608206] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:00.355 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.355 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.355 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.355 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.356 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.356 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.356 09:52:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.356 09:52:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.356 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.615 "name": "raid_bdev1", 00:15:00.615 "uuid": "969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:15:00.615 "strip_size_kb": 64, 00:15:00.615 "state": "online", 00:15:00.615 "raid_level": "raid5f", 00:15:00.615 "superblock": false, 00:15:00.615 "num_base_bdevs": 3, 00:15:00.615 "num_base_bdevs_discovered": 3, 00:15:00.615 "num_base_bdevs_operational": 3, 00:15:00.615 "process": { 
00:15:00.615 "type": "rebuild", 00:15:00.615 "target": "spare", 00:15:00.615 "progress": { 00:15:00.615 "blocks": 20480, 00:15:00.615 "percent": 15 00:15:00.615 } 00:15:00.615 }, 00:15:00.615 "base_bdevs_list": [ 00:15:00.615 { 00:15:00.615 "name": "spare", 00:15:00.615 "uuid": "fde12681-4c44-52f2-8816-2daaacb3f8be", 00:15:00.615 "is_configured": true, 00:15:00.615 "data_offset": 0, 00:15:00.615 "data_size": 65536 00:15:00.615 }, 00:15:00.615 { 00:15:00.615 "name": "BaseBdev2", 00:15:00.615 "uuid": "453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:15:00.615 "is_configured": true, 00:15:00.615 "data_offset": 0, 00:15:00.615 "data_size": 65536 00:15:00.615 }, 00:15:00.615 { 00:15:00.615 "name": "BaseBdev3", 00:15:00.615 "uuid": "c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:15:00.615 "is_configured": true, 00:15:00.615 "data_offset": 0, 00:15:00.615 "data_size": 65536 00:15:00.615 } 00:15:00.615 ] 00:15:00.615 }' 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=539 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.615 "name": "raid_bdev1", 00:15:00.615 "uuid": "969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:15:00.615 "strip_size_kb": 64, 00:15:00.615 "state": "online", 00:15:00.615 "raid_level": "raid5f", 00:15:00.615 "superblock": false, 00:15:00.615 "num_base_bdevs": 3, 00:15:00.615 "num_base_bdevs_discovered": 3, 00:15:00.615 "num_base_bdevs_operational": 3, 00:15:00.615 "process": { 00:15:00.615 "type": "rebuild", 00:15:00.615 "target": "spare", 00:15:00.615 "progress": { 00:15:00.615 "blocks": 22528, 00:15:00.615 "percent": 17 00:15:00.615 } 00:15:00.615 }, 00:15:00.615 "base_bdevs_list": [ 00:15:00.615 { 00:15:00.615 "name": "spare", 00:15:00.615 "uuid": "fde12681-4c44-52f2-8816-2daaacb3f8be", 00:15:00.615 "is_configured": true, 00:15:00.615 "data_offset": 0, 00:15:00.615 "data_size": 65536 00:15:00.615 }, 00:15:00.615 { 00:15:00.615 "name": "BaseBdev2", 
00:15:00.615 "uuid": "453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:15:00.615 "is_configured": true, 00:15:00.615 "data_offset": 0, 00:15:00.615 "data_size": 65536 00:15:00.615 }, 00:15:00.615 { 00:15:00.615 "name": "BaseBdev3", 00:15:00.615 "uuid": "c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:15:00.615 "is_configured": true, 00:15:00.615 "data_offset": 0, 00:15:00.615 "data_size": 65536 00:15:00.615 } 00:15:00.615 ] 00:15:00.615 }' 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.615 09:52:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.011 
09:52:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.011 "name": "raid_bdev1", 00:15:02.011 "uuid": "969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:15:02.011 "strip_size_kb": 64, 00:15:02.011 "state": "online", 00:15:02.011 "raid_level": "raid5f", 00:15:02.011 "superblock": false, 00:15:02.011 "num_base_bdevs": 3, 00:15:02.011 "num_base_bdevs_discovered": 3, 00:15:02.011 "num_base_bdevs_operational": 3, 00:15:02.011 "process": { 00:15:02.011 "type": "rebuild", 00:15:02.011 "target": "spare", 00:15:02.011 "progress": { 00:15:02.011 "blocks": 45056, 00:15:02.011 "percent": 34 00:15:02.011 } 00:15:02.011 }, 00:15:02.011 "base_bdevs_list": [ 00:15:02.011 { 00:15:02.011 "name": "spare", 00:15:02.011 "uuid": "fde12681-4c44-52f2-8816-2daaacb3f8be", 00:15:02.011 "is_configured": true, 00:15:02.011 "data_offset": 0, 00:15:02.011 "data_size": 65536 00:15:02.011 }, 00:15:02.011 { 00:15:02.011 "name": "BaseBdev2", 00:15:02.011 "uuid": "453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:15:02.011 "is_configured": true, 00:15:02.011 "data_offset": 0, 00:15:02.011 "data_size": 65536 00:15:02.011 }, 00:15:02.011 { 00:15:02.011 "name": "BaseBdev3", 00:15:02.011 "uuid": "c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:15:02.011 "is_configured": true, 00:15:02.011 "data_offset": 0, 00:15:02.011 "data_size": 65536 00:15:02.011 } 00:15:02.011 ] 00:15:02.011 }' 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.011 09:52:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.011 09:52:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.011 09:52:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.947 "name": "raid_bdev1", 00:15:02.947 "uuid": "969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:15:02.947 "strip_size_kb": 64, 00:15:02.947 "state": "online", 00:15:02.947 "raid_level": "raid5f", 00:15:02.947 "superblock": false, 00:15:02.947 "num_base_bdevs": 3, 00:15:02.947 "num_base_bdevs_discovered": 3, 00:15:02.947 "num_base_bdevs_operational": 3, 00:15:02.947 "process": { 00:15:02.947 "type": "rebuild", 00:15:02.947 "target": "spare", 00:15:02.947 "progress": { 00:15:02.947 "blocks": 69632, 00:15:02.947 "percent": 53 00:15:02.947 } 
00:15:02.947 }, 00:15:02.947 "base_bdevs_list": [ 00:15:02.947 { 00:15:02.947 "name": "spare", 00:15:02.947 "uuid": "fde12681-4c44-52f2-8816-2daaacb3f8be", 00:15:02.947 "is_configured": true, 00:15:02.947 "data_offset": 0, 00:15:02.947 "data_size": 65536 00:15:02.947 }, 00:15:02.947 { 00:15:02.947 "name": "BaseBdev2", 00:15:02.947 "uuid": "453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:15:02.947 "is_configured": true, 00:15:02.947 "data_offset": 0, 00:15:02.947 "data_size": 65536 00:15:02.947 }, 00:15:02.947 { 00:15:02.947 "name": "BaseBdev3", 00:15:02.947 "uuid": "c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:15:02.947 "is_configured": true, 00:15:02.947 "data_offset": 0, 00:15:02.947 "data_size": 65536 00:15:02.947 } 00:15:02.947 ] 00:15:02.947 }' 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:02.947 09:52:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.330 09:52:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.330 "name": "raid_bdev1", 00:15:04.330 "uuid": "969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:15:04.330 "strip_size_kb": 64, 00:15:04.330 "state": "online", 00:15:04.330 "raid_level": "raid5f", 00:15:04.330 "superblock": false, 00:15:04.330 "num_base_bdevs": 3, 00:15:04.330 "num_base_bdevs_discovered": 3, 00:15:04.330 "num_base_bdevs_operational": 3, 00:15:04.330 "process": { 00:15:04.330 "type": "rebuild", 00:15:04.330 "target": "spare", 00:15:04.330 "progress": { 00:15:04.330 "blocks": 92160, 00:15:04.330 "percent": 70 00:15:04.330 } 00:15:04.330 }, 00:15:04.330 "base_bdevs_list": [ 00:15:04.330 { 00:15:04.330 "name": "spare", 00:15:04.330 "uuid": "fde12681-4c44-52f2-8816-2daaacb3f8be", 00:15:04.330 "is_configured": true, 00:15:04.330 "data_offset": 0, 00:15:04.330 "data_size": 65536 00:15:04.330 }, 00:15:04.330 { 00:15:04.330 "name": "BaseBdev2", 00:15:04.330 "uuid": "453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:15:04.330 "is_configured": true, 00:15:04.330 "data_offset": 0, 00:15:04.330 "data_size": 65536 00:15:04.330 }, 00:15:04.330 { 00:15:04.330 "name": "BaseBdev3", 00:15:04.330 "uuid": "c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:15:04.330 "is_configured": true, 00:15:04.330 "data_offset": 0, 00:15:04.330 "data_size": 65536 00:15:04.330 } 00:15:04.330 ] 00:15:04.330 }' 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.330 09:52:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.269 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.269 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.269 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.269 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.270 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.270 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.270 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.270 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.270 09:52:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.270 09:52:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.270 09:52:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.270 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.270 "name": "raid_bdev1", 00:15:05.270 "uuid": "969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:15:05.270 "strip_size_kb": 64, 00:15:05.270 "state": "online", 00:15:05.270 "raid_level": "raid5f", 00:15:05.270 "superblock": 
false, 00:15:05.270 "num_base_bdevs": 3, 00:15:05.270 "num_base_bdevs_discovered": 3, 00:15:05.270 "num_base_bdevs_operational": 3, 00:15:05.270 "process": { 00:15:05.270 "type": "rebuild", 00:15:05.270 "target": "spare", 00:15:05.270 "progress": { 00:15:05.270 "blocks": 114688, 00:15:05.270 "percent": 87 00:15:05.270 } 00:15:05.270 }, 00:15:05.270 "base_bdevs_list": [ 00:15:05.270 { 00:15:05.270 "name": "spare", 00:15:05.270 "uuid": "fde12681-4c44-52f2-8816-2daaacb3f8be", 00:15:05.270 "is_configured": true, 00:15:05.270 "data_offset": 0, 00:15:05.270 "data_size": 65536 00:15:05.270 }, 00:15:05.270 { 00:15:05.270 "name": "BaseBdev2", 00:15:05.270 "uuid": "453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:15:05.270 "is_configured": true, 00:15:05.270 "data_offset": 0, 00:15:05.270 "data_size": 65536 00:15:05.270 }, 00:15:05.270 { 00:15:05.270 "name": "BaseBdev3", 00:15:05.270 "uuid": "c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:15:05.270 "is_configured": true, 00:15:05.270 "data_offset": 0, 00:15:05.270 "data_size": 65536 00:15:05.270 } 00:15:05.270 ] 00:15:05.270 }' 00:15:05.270 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.270 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.270 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.270 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.270 09:52:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:05.839 [2024-12-06 09:52:31.050390] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:05.839 [2024-12-06 09:52:31.050575] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:05.839 [2024-12-06 09:52:31.050657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.410 "name": "raid_bdev1", 00:15:06.410 "uuid": "969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:15:06.410 "strip_size_kb": 64, 00:15:06.410 "state": "online", 00:15:06.410 "raid_level": "raid5f", 00:15:06.410 "superblock": false, 00:15:06.410 "num_base_bdevs": 3, 00:15:06.410 "num_base_bdevs_discovered": 3, 00:15:06.410 "num_base_bdevs_operational": 3, 00:15:06.410 "base_bdevs_list": [ 00:15:06.410 { 00:15:06.410 "name": "spare", 00:15:06.410 "uuid": "fde12681-4c44-52f2-8816-2daaacb3f8be", 00:15:06.410 "is_configured": true, 00:15:06.410 "data_offset": 0, 00:15:06.410 "data_size": 65536 00:15:06.410 }, 00:15:06.410 { 00:15:06.410 "name": "BaseBdev2", 00:15:06.410 "uuid": 
"453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:15:06.410 "is_configured": true, 00:15:06.410 "data_offset": 0, 00:15:06.410 "data_size": 65536 00:15:06.410 }, 00:15:06.410 { 00:15:06.410 "name": "BaseBdev3", 00:15:06.410 "uuid": "c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:15:06.410 "is_configured": true, 00:15:06.410 "data_offset": 0, 00:15:06.410 "data_size": 65536 00:15:06.410 } 00:15:06.410 ] 00:15:06.410 }' 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.410 "name": "raid_bdev1", 00:15:06.410 "uuid": "969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:15:06.410 "strip_size_kb": 64, 00:15:06.410 "state": "online", 00:15:06.410 "raid_level": "raid5f", 00:15:06.410 "superblock": false, 00:15:06.410 "num_base_bdevs": 3, 00:15:06.410 "num_base_bdevs_discovered": 3, 00:15:06.410 "num_base_bdevs_operational": 3, 00:15:06.410 "base_bdevs_list": [ 00:15:06.410 { 00:15:06.410 "name": "spare", 00:15:06.410 "uuid": "fde12681-4c44-52f2-8816-2daaacb3f8be", 00:15:06.410 "is_configured": true, 00:15:06.410 "data_offset": 0, 00:15:06.410 "data_size": 65536 00:15:06.410 }, 00:15:06.410 { 00:15:06.410 "name": "BaseBdev2", 00:15:06.410 "uuid": "453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:15:06.410 "is_configured": true, 00:15:06.410 "data_offset": 0, 00:15:06.410 "data_size": 65536 00:15:06.410 }, 00:15:06.410 { 00:15:06.410 "name": "BaseBdev3", 00:15:06.410 "uuid": "c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:15:06.410 "is_configured": true, 00:15:06.410 "data_offset": 0, 00:15:06.410 "data_size": 65536 00:15:06.410 } 00:15:06.410 ] 00:15:06.410 }' 00:15:06.410 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.670 "name": "raid_bdev1", 00:15:06.670 "uuid": "969e65d7-83bc-4ab6-8ac7-2752b4c3ff41", 00:15:06.670 "strip_size_kb": 64, 00:15:06.670 "state": "online", 00:15:06.670 "raid_level": "raid5f", 00:15:06.670 "superblock": false, 00:15:06.670 "num_base_bdevs": 3, 00:15:06.670 "num_base_bdevs_discovered": 3, 00:15:06.670 "num_base_bdevs_operational": 3, 00:15:06.670 "base_bdevs_list": [ 00:15:06.670 { 00:15:06.670 "name": "spare", 00:15:06.670 "uuid": "fde12681-4c44-52f2-8816-2daaacb3f8be", 00:15:06.670 "is_configured": true, 00:15:06.670 "data_offset": 
0, 00:15:06.670 "data_size": 65536 00:15:06.670 }, 00:15:06.670 { 00:15:06.670 "name": "BaseBdev2", 00:15:06.670 "uuid": "453ae2cc-88c2-5863-85c4-6134656ed4dc", 00:15:06.670 "is_configured": true, 00:15:06.670 "data_offset": 0, 00:15:06.670 "data_size": 65536 00:15:06.670 }, 00:15:06.670 { 00:15:06.670 "name": "BaseBdev3", 00:15:06.670 "uuid": "c65cbbf8-5b8c-5583-9e8a-400afe1d98cd", 00:15:06.670 "is_configured": true, 00:15:06.670 "data_offset": 0, 00:15:06.670 "data_size": 65536 00:15:06.670 } 00:15:06.670 ] 00:15:06.670 }' 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.670 09:52:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.930 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.930 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.930 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.930 [2024-12-06 09:52:32.196181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.930 [2024-12-06 09:52:32.196254] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.930 [2024-12-06 09:52:32.196356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.930 [2024-12-06 09:52:32.196470] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.930 [2024-12-06 09:52:32.196522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:06.930 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.189 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.189 09:52:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.189 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.189 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:07.189 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.189 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:07.189 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:07.190 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:07.190 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:07.190 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.190 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:07.190 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:07.190 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:07.190 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:07.190 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:07.190 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:07.190 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.190 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:07.190 /dev/nbd0 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.450 1+0 records in 00:15:07.450 1+0 records out 00:15:07.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034391 s, 11.9 MB/s 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.450 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.450 
09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:07.450 /dev/nbd1 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:07.709 1+0 records in 00:15:07.709 1+0 records out 00:15:07.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413918 s, 9.9 MB/s 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.709 09:52:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:07.969 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:07.969 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:07.969 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:07.969 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:07.969 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:07.969 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:07.969 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:15:07.969 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:07.969 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:07.969 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81446 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81446 ']' 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81446 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81446 00:15:08.229 killing process with pid 81446 00:15:08.229 Received shutdown signal, test time 
was about 60.000000 seconds 00:15:08.229 00:15:08.229 Latency(us) 00:15:08.229 [2024-12-06T09:52:33.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.229 [2024-12-06T09:52:33.502Z] =================================================================================================================== 00:15:08.229 [2024-12-06T09:52:33.502Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81446' 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81446 00:15:08.229 [2024-12-06 09:52:33.416280] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.229 09:52:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81446 00:15:08.799 [2024-12-06 09:52:33.796554] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:09.738 00:15:09.738 real 0m15.161s 00:15:09.738 user 0m18.628s 00:15:09.738 sys 0m2.026s 00:15:09.738 ************************************ 00:15:09.738 END TEST raid5f_rebuild_test 00:15:09.738 ************************************ 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.738 09:52:34 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:09.738 09:52:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:09.738 09:52:34 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.738 09:52:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.738 ************************************ 00:15:09.738 START TEST raid5f_rebuild_test_sb 00:15:09.738 ************************************ 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:09.738 09:52:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81886 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81886 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81886 
']' 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.738 09:52:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.998 [2024-12-06 09:52:35.059965] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:15:09.998 [2024-12-06 09:52:35.060185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81886 ] 00:15:09.998 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:09.998 Zero copy mechanism will not be used. 
00:15:09.998 [2024-12-06 09:52:35.229327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.257 [2024-12-06 09:52:35.340830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.515 [2024-12-06 09:52:35.529350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.515 [2024-12-06 09:52:35.529386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.774 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.774 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:10.774 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.774 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.775 BaseBdev1_malloc 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.775 [2024-12-06 09:52:35.925354] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:10.775 [2024-12-06 09:52:35.925415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.775 [2024-12-06 09:52:35.925446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:10.775 
[2024-12-06 09:52:35.925457] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.775 [2024-12-06 09:52:35.927378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.775 [2024-12-06 09:52:35.927416] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:10.775 BaseBdev1 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.775 BaseBdev2_malloc 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.775 [2024-12-06 09:52:35.979197] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:10.775 [2024-12-06 09:52:35.979249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.775 [2024-12-06 09:52:35.979272] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:10.775 [2024-12-06 09:52:35.979283] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.775 [2024-12-06 09:52:35.981266] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.775 [2024-12-06 09:52:35.981303] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:10.775 BaseBdev2 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.775 09:52:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.775 BaseBdev3_malloc 00:15:10.775 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.775 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:10.775 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.775 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.037 [2024-12-06 09:52:36.049058] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:11.037 [2024-12-06 09:52:36.049110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.037 [2024-12-06 09:52:36.049141] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:11.037 [2024-12-06 09:52:36.049162] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.037 [2024-12-06 09:52:36.051075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.037 [2024-12-06 09:52:36.051158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:11.037 BaseBdev3 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.037 spare_malloc 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.037 spare_delay 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.037 [2024-12-06 09:52:36.114230] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:11.037 [2024-12-06 09:52:36.114314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.037 [2024-12-06 09:52:36.114334] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:11.037 [2024-12-06 09:52:36.114344] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.037 [2024-12-06 09:52:36.116326] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.037 [2024-12-06 09:52:36.116369] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:11.037 spare 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.037 [2024-12-06 09:52:36.126273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.037 [2024-12-06 09:52:36.127966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:11.037 [2024-12-06 09:52:36.128028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:11.037 [2024-12-06 09:52:36.128205] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:11.037 [2024-12-06 09:52:36.128218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:11.037 [2024-12-06 09:52:36.128459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:11.037 [2024-12-06 09:52:36.133993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:11.037 [2024-12-06 09:52:36.134047] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:11.037 [2024-12-06 09:52:36.134268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.037 09:52:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.037 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.038 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.038 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.038 "name": "raid_bdev1", 00:15:11.038 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:11.038 "strip_size_kb": 64, 00:15:11.038 "state": "online", 00:15:11.038 "raid_level": "raid5f", 00:15:11.038 "superblock": true, 
00:15:11.038 "num_base_bdevs": 3, 00:15:11.038 "num_base_bdevs_discovered": 3, 00:15:11.038 "num_base_bdevs_operational": 3, 00:15:11.038 "base_bdevs_list": [ 00:15:11.038 { 00:15:11.038 "name": "BaseBdev1", 00:15:11.038 "uuid": "2d13f6bf-e7be-5cbd-9968-84cc34ad3a63", 00:15:11.038 "is_configured": true, 00:15:11.038 "data_offset": 2048, 00:15:11.038 "data_size": 63488 00:15:11.038 }, 00:15:11.038 { 00:15:11.038 "name": "BaseBdev2", 00:15:11.038 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:11.038 "is_configured": true, 00:15:11.038 "data_offset": 2048, 00:15:11.038 "data_size": 63488 00:15:11.038 }, 00:15:11.038 { 00:15:11.038 "name": "BaseBdev3", 00:15:11.038 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:11.038 "is_configured": true, 00:15:11.038 "data_offset": 2048, 00:15:11.038 "data_size": 63488 00:15:11.038 } 00:15:11.038 ] 00:15:11.038 }' 00:15:11.038 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.038 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.608 [2024-12-06 09:52:36.595930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.608 09:52:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:11.608 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:15:11.608 [2024-12-06 09:52:36.859336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:11.867 /dev/nbd0 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:11.868 1+0 records in 00:15:11.868 1+0 records out 00:15:11.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469323 s, 8.7 MB/s 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:11.868 09:52:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:12.128 496+0 records in 00:15:12.128 496+0 records out 00:15:12.129 65011712 bytes (65 MB, 62 MiB) copied, 0.365564 s, 178 MB/s 00:15:12.129 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:12.129 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.129 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:12.129 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:12.129 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:12.129 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.129 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:12.389 [2024-12-06 09:52:37.489698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.389 [2024-12-06 09:52:37.525225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.389 09:52:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.389 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.389 "name": "raid_bdev1", 00:15:12.389 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:12.389 "strip_size_kb": 64, 00:15:12.390 "state": "online", 00:15:12.390 "raid_level": "raid5f", 00:15:12.390 "superblock": true, 00:15:12.390 "num_base_bdevs": 3, 00:15:12.390 "num_base_bdevs_discovered": 2, 00:15:12.390 "num_base_bdevs_operational": 2, 00:15:12.390 "base_bdevs_list": [ 00:15:12.390 { 00:15:12.390 "name": null, 00:15:12.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.390 "is_configured": false, 00:15:12.390 "data_offset": 0, 00:15:12.390 "data_size": 63488 00:15:12.390 }, 00:15:12.390 { 00:15:12.390 "name": "BaseBdev2", 00:15:12.390 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:12.390 "is_configured": true, 00:15:12.390 "data_offset": 2048, 00:15:12.390 "data_size": 63488 00:15:12.390 }, 00:15:12.390 { 00:15:12.390 "name": "BaseBdev3", 00:15:12.390 "uuid": 
"de858497-606c-5e57-abd0-0a29af0a879c", 00:15:12.390 "is_configured": true, 00:15:12.390 "data_offset": 2048, 00:15:12.390 "data_size": 63488 00:15:12.390 } 00:15:12.390 ] 00:15:12.390 }' 00:15:12.390 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.390 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.959 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:12.959 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.959 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.959 [2024-12-06 09:52:37.952475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.959 [2024-12-06 09:52:37.969308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:12.959 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.960 09:52:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:12.960 [2024-12-06 09:52:37.976597] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.900 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.900 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.900 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.900 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.900 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.900 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:15:13.900 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.900 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.900 09:52:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.900 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.900 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.900 "name": "raid_bdev1", 00:15:13.900 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:13.900 "strip_size_kb": 64, 00:15:13.900 "state": "online", 00:15:13.900 "raid_level": "raid5f", 00:15:13.900 "superblock": true, 00:15:13.900 "num_base_bdevs": 3, 00:15:13.900 "num_base_bdevs_discovered": 3, 00:15:13.900 "num_base_bdevs_operational": 3, 00:15:13.900 "process": { 00:15:13.900 "type": "rebuild", 00:15:13.900 "target": "spare", 00:15:13.900 "progress": { 00:15:13.900 "blocks": 20480, 00:15:13.900 "percent": 16 00:15:13.900 } 00:15:13.900 }, 00:15:13.900 "base_bdevs_list": [ 00:15:13.900 { 00:15:13.900 "name": "spare", 00:15:13.900 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:13.900 "is_configured": true, 00:15:13.900 "data_offset": 2048, 00:15:13.900 "data_size": 63488 00:15:13.900 }, 00:15:13.900 { 00:15:13.900 "name": "BaseBdev2", 00:15:13.900 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:13.900 "is_configured": true, 00:15:13.900 "data_offset": 2048, 00:15:13.900 "data_size": 63488 00:15:13.900 }, 00:15:13.900 { 00:15:13.900 "name": "BaseBdev3", 00:15:13.900 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:13.900 "is_configured": true, 00:15:13.900 "data_offset": 2048, 00:15:13.900 "data_size": 63488 00:15:13.900 } 00:15:13.900 ] 00:15:13.900 }' 00:15:13.900 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.900 09:52:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.900 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.900 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.900 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:13.900 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.900 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.900 [2024-12-06 09:52:39.115584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.160 [2024-12-06 09:52:39.184530] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:14.160 [2024-12-06 09:52:39.184586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.160 [2024-12-06 09:52:39.184603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.160 [2024-12-06 09:52:39.184611] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.160 09:52:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.160 "name": "raid_bdev1", 00:15:14.160 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:14.160 "strip_size_kb": 64, 00:15:14.160 "state": "online", 00:15:14.160 "raid_level": "raid5f", 00:15:14.160 "superblock": true, 00:15:14.160 "num_base_bdevs": 3, 00:15:14.160 "num_base_bdevs_discovered": 2, 00:15:14.160 "num_base_bdevs_operational": 2, 00:15:14.160 "base_bdevs_list": [ 00:15:14.160 { 00:15:14.160 "name": null, 00:15:14.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.160 "is_configured": false, 00:15:14.160 "data_offset": 0, 00:15:14.160 "data_size": 63488 00:15:14.160 }, 00:15:14.160 { 00:15:14.160 "name": "BaseBdev2", 00:15:14.160 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:14.160 "is_configured": true, 00:15:14.160 "data_offset": 2048, 00:15:14.160 "data_size": 
63488 00:15:14.160 }, 00:15:14.160 { 00:15:14.160 "name": "BaseBdev3", 00:15:14.160 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:14.160 "is_configured": true, 00:15:14.160 "data_offset": 2048, 00:15:14.160 "data_size": 63488 00:15:14.160 } 00:15:14.160 ] 00:15:14.160 }' 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.160 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.420 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.420 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.420 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.420 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.420 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.420 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.420 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.420 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.420 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.420 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.680 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.680 "name": "raid_bdev1", 00:15:14.680 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:14.680 "strip_size_kb": 64, 00:15:14.680 "state": "online", 00:15:14.680 "raid_level": "raid5f", 00:15:14.680 "superblock": true, 00:15:14.680 "num_base_bdevs": 3, 00:15:14.680 
"num_base_bdevs_discovered": 2, 00:15:14.680 "num_base_bdevs_operational": 2, 00:15:14.680 "base_bdevs_list": [ 00:15:14.680 { 00:15:14.680 "name": null, 00:15:14.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.680 "is_configured": false, 00:15:14.680 "data_offset": 0, 00:15:14.680 "data_size": 63488 00:15:14.680 }, 00:15:14.680 { 00:15:14.680 "name": "BaseBdev2", 00:15:14.680 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:14.680 "is_configured": true, 00:15:14.680 "data_offset": 2048, 00:15:14.680 "data_size": 63488 00:15:14.680 }, 00:15:14.680 { 00:15:14.680 "name": "BaseBdev3", 00:15:14.680 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:14.680 "is_configured": true, 00:15:14.680 "data_offset": 2048, 00:15:14.680 "data_size": 63488 00:15:14.680 } 00:15:14.680 ] 00:15:14.680 }' 00:15:14.680 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.680 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.680 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.680 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.680 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:14.680 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.680 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.680 [2024-12-06 09:52:39.801573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.680 [2024-12-06 09:52:39.817166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:14.680 09:52:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.680 09:52:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:14.680 [2024-12-06 09:52:39.824194] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.620 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.620 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.620 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.620 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.620 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.620 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.620 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.620 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.620 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.620 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.620 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.620 "name": "raid_bdev1", 00:15:15.620 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:15.620 "strip_size_kb": 64, 00:15:15.620 "state": "online", 00:15:15.620 "raid_level": "raid5f", 00:15:15.620 "superblock": true, 00:15:15.620 "num_base_bdevs": 3, 00:15:15.620 "num_base_bdevs_discovered": 3, 00:15:15.620 "num_base_bdevs_operational": 3, 00:15:15.620 "process": { 00:15:15.620 "type": "rebuild", 00:15:15.620 "target": "spare", 00:15:15.620 "progress": { 00:15:15.620 "blocks": 20480, 00:15:15.620 "percent": 16 00:15:15.620 } 
00:15:15.620 }, 00:15:15.620 "base_bdevs_list": [ 00:15:15.620 { 00:15:15.620 "name": "spare", 00:15:15.620 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:15.620 "is_configured": true, 00:15:15.620 "data_offset": 2048, 00:15:15.620 "data_size": 63488 00:15:15.620 }, 00:15:15.620 { 00:15:15.620 "name": "BaseBdev2", 00:15:15.620 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:15.620 "is_configured": true, 00:15:15.620 "data_offset": 2048, 00:15:15.620 "data_size": 63488 00:15:15.620 }, 00:15:15.620 { 00:15:15.620 "name": "BaseBdev3", 00:15:15.620 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:15.620 "is_configured": true, 00:15:15.620 "data_offset": 2048, 00:15:15.620 "data_size": 63488 00:15:15.620 } 00:15:15.620 ] 00:15:15.620 }' 00:15:15.620 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:15.880 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=554 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.880 09:52:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.880 09:52:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.880 09:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.880 "name": "raid_bdev1", 00:15:15.880 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:15.880 "strip_size_kb": 64, 00:15:15.880 "state": "online", 00:15:15.880 "raid_level": "raid5f", 00:15:15.880 "superblock": true, 00:15:15.880 "num_base_bdevs": 3, 00:15:15.880 "num_base_bdevs_discovered": 3, 00:15:15.880 "num_base_bdevs_operational": 3, 00:15:15.880 "process": { 00:15:15.880 "type": "rebuild", 00:15:15.880 "target": "spare", 00:15:15.880 "progress": { 00:15:15.880 "blocks": 22528, 00:15:15.880 "percent": 17 00:15:15.880 } 00:15:15.880 }, 00:15:15.880 "base_bdevs_list": [ 00:15:15.880 { 00:15:15.880 "name": "spare", 00:15:15.880 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:15.880 "is_configured": true, 00:15:15.880 "data_offset": 2048, 00:15:15.880 
"data_size": 63488 00:15:15.880 }, 00:15:15.880 { 00:15:15.880 "name": "BaseBdev2", 00:15:15.880 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:15.880 "is_configured": true, 00:15:15.880 "data_offset": 2048, 00:15:15.880 "data_size": 63488 00:15:15.880 }, 00:15:15.880 { 00:15:15.880 "name": "BaseBdev3", 00:15:15.880 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:15.880 "is_configured": true, 00:15:15.880 "data_offset": 2048, 00:15:15.880 "data_size": 63488 00:15:15.880 } 00:15:15.880 ] 00:15:15.880 }' 00:15:15.880 09:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.880 09:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.880 09:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.880 09:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.880 09:52:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:17.259 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.259 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.259 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.259 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.259 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.259 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.259 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.259 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:17.259 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.259 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.259 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.259 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.259 "name": "raid_bdev1", 00:15:17.259 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:17.259 "strip_size_kb": 64, 00:15:17.259 "state": "online", 00:15:17.259 "raid_level": "raid5f", 00:15:17.259 "superblock": true, 00:15:17.259 "num_base_bdevs": 3, 00:15:17.259 "num_base_bdevs_discovered": 3, 00:15:17.259 "num_base_bdevs_operational": 3, 00:15:17.259 "process": { 00:15:17.259 "type": "rebuild", 00:15:17.259 "target": "spare", 00:15:17.259 "progress": { 00:15:17.259 "blocks": 45056, 00:15:17.259 "percent": 35 00:15:17.259 } 00:15:17.259 }, 00:15:17.259 "base_bdevs_list": [ 00:15:17.259 { 00:15:17.259 "name": "spare", 00:15:17.259 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:17.259 "is_configured": true, 00:15:17.259 "data_offset": 2048, 00:15:17.259 "data_size": 63488 00:15:17.259 }, 00:15:17.259 { 00:15:17.259 "name": "BaseBdev2", 00:15:17.259 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:17.259 "is_configured": true, 00:15:17.260 "data_offset": 2048, 00:15:17.260 "data_size": 63488 00:15:17.260 }, 00:15:17.260 { 00:15:17.260 "name": "BaseBdev3", 00:15:17.260 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:17.260 "is_configured": true, 00:15:17.260 "data_offset": 2048, 00:15:17.260 "data_size": 63488 00:15:17.260 } 00:15:17.260 ] 00:15:17.260 }' 00:15:17.260 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.260 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.260 09:52:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.260 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.260 09:52:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:18.199 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.199 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.199 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.199 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.199 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.199 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.199 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.199 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.199 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.199 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.199 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.199 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.199 "name": "raid_bdev1", 00:15:18.199 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:18.199 "strip_size_kb": 64, 00:15:18.199 "state": "online", 00:15:18.199 "raid_level": "raid5f", 00:15:18.199 "superblock": true, 00:15:18.199 "num_base_bdevs": 3, 00:15:18.199 "num_base_bdevs_discovered": 3, 00:15:18.199 "num_base_bdevs_operational": 
3, 00:15:18.199 "process": { 00:15:18.199 "type": "rebuild", 00:15:18.199 "target": "spare", 00:15:18.199 "progress": { 00:15:18.199 "blocks": 69632, 00:15:18.199 "percent": 54 00:15:18.199 } 00:15:18.199 }, 00:15:18.199 "base_bdevs_list": [ 00:15:18.199 { 00:15:18.199 "name": "spare", 00:15:18.199 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:18.199 "is_configured": true, 00:15:18.199 "data_offset": 2048, 00:15:18.199 "data_size": 63488 00:15:18.199 }, 00:15:18.199 { 00:15:18.200 "name": "BaseBdev2", 00:15:18.200 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:18.200 "is_configured": true, 00:15:18.200 "data_offset": 2048, 00:15:18.200 "data_size": 63488 00:15:18.200 }, 00:15:18.200 { 00:15:18.200 "name": "BaseBdev3", 00:15:18.200 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:18.200 "is_configured": true, 00:15:18.200 "data_offset": 2048, 00:15:18.200 "data_size": 63488 00:15:18.200 } 00:15:18.200 ] 00:15:18.200 }' 00:15:18.200 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.200 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.200 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.200 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.200 09:52:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:19.138 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:19.138 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.138 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.138 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.138 
09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.138 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.138 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.138 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.138 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.138 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.397 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.397 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.397 "name": "raid_bdev1", 00:15:19.397 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:19.397 "strip_size_kb": 64, 00:15:19.397 "state": "online", 00:15:19.397 "raid_level": "raid5f", 00:15:19.397 "superblock": true, 00:15:19.397 "num_base_bdevs": 3, 00:15:19.397 "num_base_bdevs_discovered": 3, 00:15:19.397 "num_base_bdevs_operational": 3, 00:15:19.397 "process": { 00:15:19.397 "type": "rebuild", 00:15:19.397 "target": "spare", 00:15:19.397 "progress": { 00:15:19.397 "blocks": 92160, 00:15:19.397 "percent": 72 00:15:19.397 } 00:15:19.397 }, 00:15:19.397 "base_bdevs_list": [ 00:15:19.397 { 00:15:19.397 "name": "spare", 00:15:19.397 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:19.397 "is_configured": true, 00:15:19.397 "data_offset": 2048, 00:15:19.397 "data_size": 63488 00:15:19.397 }, 00:15:19.397 { 00:15:19.397 "name": "BaseBdev2", 00:15:19.397 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:19.397 "is_configured": true, 00:15:19.397 "data_offset": 2048, 00:15:19.397 "data_size": 63488 00:15:19.397 }, 00:15:19.397 { 00:15:19.397 "name": "BaseBdev3", 00:15:19.397 "uuid": 
"de858497-606c-5e57-abd0-0a29af0a879c", 00:15:19.397 "is_configured": true, 00:15:19.397 "data_offset": 2048, 00:15:19.397 "data_size": 63488 00:15:19.397 } 00:15:19.397 ] 00:15:19.397 }' 00:15:19.397 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.397 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.397 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.397 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.397 09:52:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.335 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.335 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.335 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.335 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.335 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.336 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.336 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.336 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.336 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.336 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.336 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.594 
09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.594 "name": "raid_bdev1", 00:15:20.594 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:20.594 "strip_size_kb": 64, 00:15:20.594 "state": "online", 00:15:20.594 "raid_level": "raid5f", 00:15:20.594 "superblock": true, 00:15:20.594 "num_base_bdevs": 3, 00:15:20.594 "num_base_bdevs_discovered": 3, 00:15:20.594 "num_base_bdevs_operational": 3, 00:15:20.594 "process": { 00:15:20.594 "type": "rebuild", 00:15:20.594 "target": "spare", 00:15:20.594 "progress": { 00:15:20.594 "blocks": 116736, 00:15:20.594 "percent": 91 00:15:20.594 } 00:15:20.594 }, 00:15:20.594 "base_bdevs_list": [ 00:15:20.594 { 00:15:20.594 "name": "spare", 00:15:20.594 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:20.594 "is_configured": true, 00:15:20.594 "data_offset": 2048, 00:15:20.594 "data_size": 63488 00:15:20.594 }, 00:15:20.594 { 00:15:20.594 "name": "BaseBdev2", 00:15:20.594 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:20.594 "is_configured": true, 00:15:20.594 "data_offset": 2048, 00:15:20.594 "data_size": 63488 00:15:20.594 }, 00:15:20.594 { 00:15:20.594 "name": "BaseBdev3", 00:15:20.594 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:20.594 "is_configured": true, 00:15:20.594 "data_offset": 2048, 00:15:20.594 "data_size": 63488 00:15:20.594 } 00:15:20.594 ] 00:15:20.594 }' 00:15:20.594 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.594 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.594 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.594 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.594 09:52:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.852 [2024-12-06 09:52:46.065046] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:20.852 [2024-12-06 09:52:46.065209] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:20.853 [2024-12-06 09:52:46.065379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.788 "name": "raid_bdev1", 00:15:21.788 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:21.788 "strip_size_kb": 64, 00:15:21.788 "state": "online", 00:15:21.788 "raid_level": "raid5f", 00:15:21.788 "superblock": true, 00:15:21.788 "num_base_bdevs": 3, 00:15:21.788 "num_base_bdevs_discovered": 3, 
00:15:21.788 "num_base_bdevs_operational": 3, 00:15:21.788 "base_bdevs_list": [ 00:15:21.788 { 00:15:21.788 "name": "spare", 00:15:21.788 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:21.788 "is_configured": true, 00:15:21.788 "data_offset": 2048, 00:15:21.788 "data_size": 63488 00:15:21.788 }, 00:15:21.788 { 00:15:21.788 "name": "BaseBdev2", 00:15:21.788 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:21.788 "is_configured": true, 00:15:21.788 "data_offset": 2048, 00:15:21.788 "data_size": 63488 00:15:21.788 }, 00:15:21.788 { 00:15:21.788 "name": "BaseBdev3", 00:15:21.788 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:21.788 "is_configured": true, 00:15:21.788 "data_offset": 2048, 00:15:21.788 "data_size": 63488 00:15:21.788 } 00:15:21.788 ] 00:15:21.788 }' 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.788 "name": "raid_bdev1", 00:15:21.788 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:21.788 "strip_size_kb": 64, 00:15:21.788 "state": "online", 00:15:21.788 "raid_level": "raid5f", 00:15:21.788 "superblock": true, 00:15:21.788 "num_base_bdevs": 3, 00:15:21.788 "num_base_bdevs_discovered": 3, 00:15:21.788 "num_base_bdevs_operational": 3, 00:15:21.788 "base_bdevs_list": [ 00:15:21.788 { 00:15:21.788 "name": "spare", 00:15:21.788 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:21.788 "is_configured": true, 00:15:21.788 "data_offset": 2048, 00:15:21.788 "data_size": 63488 00:15:21.788 }, 00:15:21.788 { 00:15:21.788 "name": "BaseBdev2", 00:15:21.788 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:21.788 "is_configured": true, 00:15:21.788 "data_offset": 2048, 00:15:21.788 "data_size": 63488 00:15:21.788 }, 00:15:21.788 { 00:15:21.788 "name": "BaseBdev3", 00:15:21.788 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:21.788 "is_configured": true, 00:15:21.788 "data_offset": 2048, 00:15:21.788 "data_size": 63488 00:15:21.788 } 00:15:21.788 ] 00:15:21.788 }' 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.788 09:52:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:21.788 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.788 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.789 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.047 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.047 "name": "raid_bdev1", 00:15:22.047 "uuid": 
"67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:22.047 "strip_size_kb": 64, 00:15:22.047 "state": "online", 00:15:22.047 "raid_level": "raid5f", 00:15:22.047 "superblock": true, 00:15:22.047 "num_base_bdevs": 3, 00:15:22.047 "num_base_bdevs_discovered": 3, 00:15:22.047 "num_base_bdevs_operational": 3, 00:15:22.047 "base_bdevs_list": [ 00:15:22.047 { 00:15:22.047 "name": "spare", 00:15:22.047 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:22.047 "is_configured": true, 00:15:22.047 "data_offset": 2048, 00:15:22.047 "data_size": 63488 00:15:22.047 }, 00:15:22.047 { 00:15:22.047 "name": "BaseBdev2", 00:15:22.047 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:22.047 "is_configured": true, 00:15:22.047 "data_offset": 2048, 00:15:22.047 "data_size": 63488 00:15:22.047 }, 00:15:22.047 { 00:15:22.047 "name": "BaseBdev3", 00:15:22.047 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:22.047 "is_configured": true, 00:15:22.047 "data_offset": 2048, 00:15:22.047 "data_size": 63488 00:15:22.047 } 00:15:22.047 ] 00:15:22.047 }' 00:15:22.047 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.047 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.307 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:22.307 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.307 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.307 [2024-12-06 09:52:47.526479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.307 [2024-12-06 09:52:47.526514] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.307 [2024-12-06 09:52:47.526607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.307 [2024-12-06 09:52:47.526694] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.307 [2024-12-06 09:52:47.526710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:22.307 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.307 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.307 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.307 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.307 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:22.307 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:22.566 /dev/nbd0 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.566 1+0 records in 00:15:22.566 1+0 records out 00:15:22.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529204 s, 7.7 MB/s 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.566 09:52:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.566 09:52:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:22.826 /dev/nbd1 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.826 1+0 records in 00:15:22.826 1+0 records out 00:15:22.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353186 s, 11.6 MB/s 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:22.826 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:23.086 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:23.086 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:23.086 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:23.086 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:23.086 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:23.086 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.086 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:15:23.353 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:23.353 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:23.353 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:23.353 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.353 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.353 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:23.353 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:23.353 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.353 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.353 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.627 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.627 [2024-12-06 09:52:48.701245] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:23.627 [2024-12-06 09:52:48.701303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.627 [2024-12-06 09:52:48.701323] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:23.627 [2024-12-06 09:52:48.701334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.627 [2024-12-06 09:52:48.703527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.627 [2024-12-06 09:52:48.703568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:23.628 [2024-12-06 09:52:48.703656] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:23.628 [2024-12-06 09:52:48.703712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:23.628 [2024-12-06 09:52:48.703856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.628 [2024-12-06 09:52:48.703977] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:23.628 spare 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.628 [2024-12-06 09:52:48.803900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:23.628 [2024-12-06 09:52:48.803930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:23.628 [2024-12-06 09:52:48.804218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:23.628 [2024-12-06 09:52:48.809703] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:23.628 [2024-12-06 09:52:48.809769] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:23.628 [2024-12-06 09:52:48.809966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.628 "name": "raid_bdev1", 00:15:23.628 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:23.628 "strip_size_kb": 64, 00:15:23.628 "state": "online", 00:15:23.628 "raid_level": "raid5f", 00:15:23.628 "superblock": true, 00:15:23.628 "num_base_bdevs": 3, 00:15:23.628 "num_base_bdevs_discovered": 3, 00:15:23.628 "num_base_bdevs_operational": 3, 00:15:23.628 "base_bdevs_list": [ 00:15:23.628 { 00:15:23.628 "name": "spare", 00:15:23.628 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:23.628 "is_configured": true, 00:15:23.628 "data_offset": 2048, 00:15:23.628 "data_size": 63488 00:15:23.628 }, 00:15:23.628 { 00:15:23.628 "name": "BaseBdev2", 00:15:23.628 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:23.628 "is_configured": true, 00:15:23.628 "data_offset": 
2048, 00:15:23.628 "data_size": 63488 00:15:23.628 }, 00:15:23.628 { 00:15:23.628 "name": "BaseBdev3", 00:15:23.628 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:23.628 "is_configured": true, 00:15:23.628 "data_offset": 2048, 00:15:23.628 "data_size": 63488 00:15:23.628 } 00:15:23.628 ] 00:15:23.628 }' 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.628 09:52:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.196 "name": "raid_bdev1", 00:15:24.196 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:24.196 "strip_size_kb": 64, 00:15:24.196 "state": "online", 00:15:24.196 "raid_level": "raid5f", 00:15:24.196 "superblock": true, 00:15:24.196 
"num_base_bdevs": 3, 00:15:24.196 "num_base_bdevs_discovered": 3, 00:15:24.196 "num_base_bdevs_operational": 3, 00:15:24.196 "base_bdevs_list": [ 00:15:24.196 { 00:15:24.196 "name": "spare", 00:15:24.196 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:24.196 "is_configured": true, 00:15:24.196 "data_offset": 2048, 00:15:24.196 "data_size": 63488 00:15:24.196 }, 00:15:24.196 { 00:15:24.196 "name": "BaseBdev2", 00:15:24.196 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:24.196 "is_configured": true, 00:15:24.196 "data_offset": 2048, 00:15:24.196 "data_size": 63488 00:15:24.196 }, 00:15:24.196 { 00:15:24.196 "name": "BaseBdev3", 00:15:24.196 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:24.196 "is_configured": true, 00:15:24.196 "data_offset": 2048, 00:15:24.196 "data_size": 63488 00:15:24.196 } 00:15:24.196 ] 00:15:24.196 }' 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.196 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.197 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.197 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:24.197 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.197 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.197 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.197 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.197 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.197 09:52:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:24.197 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.197 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.197 [2024-12-06 09:52:49.467372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.456 09:52:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.456 "name": "raid_bdev1", 00:15:24.456 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:24.456 "strip_size_kb": 64, 00:15:24.456 "state": "online", 00:15:24.456 "raid_level": "raid5f", 00:15:24.456 "superblock": true, 00:15:24.456 "num_base_bdevs": 3, 00:15:24.456 "num_base_bdevs_discovered": 2, 00:15:24.456 "num_base_bdevs_operational": 2, 00:15:24.456 "base_bdevs_list": [ 00:15:24.456 { 00:15:24.456 "name": null, 00:15:24.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.456 "is_configured": false, 00:15:24.456 "data_offset": 0, 00:15:24.456 "data_size": 63488 00:15:24.456 }, 00:15:24.456 { 00:15:24.456 "name": "BaseBdev2", 00:15:24.456 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:24.456 "is_configured": true, 00:15:24.456 "data_offset": 2048, 00:15:24.456 "data_size": 63488 00:15:24.456 }, 00:15:24.456 { 00:15:24.456 "name": "BaseBdev3", 00:15:24.456 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:24.456 "is_configured": true, 00:15:24.456 "data_offset": 2048, 00:15:24.456 "data_size": 63488 00:15:24.456 } 00:15:24.456 ] 00:15:24.456 }' 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.456 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.715 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.715 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.715 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.715 [2024-12-06 09:52:49.934604] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.715 [2024-12-06 09:52:49.934863] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:24.715 [2024-12-06 09:52:49.934926] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:24.715 [2024-12-06 09:52:49.934989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.715 [2024-12-06 09:52:49.950304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:24.715 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.715 09:52:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:24.715 [2024-12-06 09:52:49.957192] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:26.095 09:52:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.095 09:52:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.095 09:52:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.095 09:52:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.095 09:52:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.095 09:52:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.096 09:52:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.096 09:52:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.096 09:52:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:26.096 09:52:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.096 09:52:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.096 "name": "raid_bdev1", 00:15:26.096 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:26.096 "strip_size_kb": 64, 00:15:26.096 "state": "online", 00:15:26.096 "raid_level": "raid5f", 00:15:26.096 "superblock": true, 00:15:26.096 "num_base_bdevs": 3, 00:15:26.096 "num_base_bdevs_discovered": 3, 00:15:26.096 "num_base_bdevs_operational": 3, 00:15:26.096 "process": { 00:15:26.096 "type": "rebuild", 00:15:26.096 "target": "spare", 00:15:26.096 "progress": { 00:15:26.096 "blocks": 20480, 00:15:26.096 "percent": 16 00:15:26.096 } 00:15:26.096 }, 00:15:26.096 "base_bdevs_list": [ 00:15:26.096 { 00:15:26.096 "name": "spare", 00:15:26.096 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:26.096 "is_configured": true, 00:15:26.096 "data_offset": 2048, 00:15:26.096 "data_size": 63488 00:15:26.096 }, 00:15:26.096 { 00:15:26.096 "name": "BaseBdev2", 00:15:26.096 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:26.096 "is_configured": true, 00:15:26.096 "data_offset": 2048, 00:15:26.096 "data_size": 63488 00:15:26.096 }, 00:15:26.096 { 00:15:26.096 "name": "BaseBdev3", 00:15:26.096 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:26.096 "is_configured": true, 00:15:26.096 "data_offset": 2048, 00:15:26.096 "data_size": 63488 00:15:26.096 } 00:15:26.096 ] 00:15:26.096 }' 00:15:26.096 09:52:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
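The rebuild `progress` object above reports `"blocks": 20480` at `"percent": 16`. With the raid bdev's total size of 126976 blocks (logged earlier as `blockcnt 126976, blocklen 512` in `raid_bdev_configure_cont`), that percentage is plain integer arithmetic. A sketch, assuming the percent field is simply the rebuilt block count scaled against the total blockcnt:

```shell
blocks=20480       # "blocks": 20480 from the process.progress object
blockcnt=126976    # from the "blockcnt 126976, blocklen 512" debug line
percent=$(( blocks * 100 / blockcnt ))
echo "rebuild progress: ${percent}%"   # prints 16, matching "percent": 16
```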
00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.096 [2024-12-06 09:52:51.088026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:26.096 [2024-12-06 09:52:51.165590] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:26.096 [2024-12-06 09:52:51.165650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.096 [2024-12-06 09:52:51.165665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:26.096 [2024-12-06 09:52:51.165674] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.096 09:52:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.096 "name": "raid_bdev1", 00:15:26.096 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:26.096 "strip_size_kb": 64, 00:15:26.096 "state": "online", 00:15:26.096 "raid_level": "raid5f", 00:15:26.096 "superblock": true, 00:15:26.096 "num_base_bdevs": 3, 00:15:26.096 "num_base_bdevs_discovered": 2, 00:15:26.096 "num_base_bdevs_operational": 2, 00:15:26.096 "base_bdevs_list": [ 00:15:26.096 { 00:15:26.096 "name": null, 00:15:26.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.096 "is_configured": false, 00:15:26.096 "data_offset": 0, 00:15:26.096 "data_size": 63488 00:15:26.096 }, 00:15:26.096 { 00:15:26.096 "name": "BaseBdev2", 00:15:26.096 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:26.096 "is_configured": true, 00:15:26.096 "data_offset": 2048, 00:15:26.096 "data_size": 63488 00:15:26.096 }, 00:15:26.096 { 00:15:26.096 "name": "BaseBdev3", 00:15:26.096 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:26.096 "is_configured": true, 00:15:26.096 "data_offset": 2048, 00:15:26.096 "data_size": 63488 00:15:26.096 } 00:15:26.096 ] 00:15:26.096 }' 00:15:26.096 09:52:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.096 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.666 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:26.666 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.666 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.666 [2024-12-06 09:52:51.651067] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:26.666 [2024-12-06 09:52:51.651181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.666 [2024-12-06 09:52:51.651221] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:26.666 [2024-12-06 09:52:51.651254] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.666 [2024-12-06 09:52:51.651783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.666 [2024-12-06 09:52:51.651847] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:26.666 [2024-12-06 09:52:51.651980] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:26.666 [2024-12-06 09:52:51.652024] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:26.666 [2024-12-06 09:52:51.652067] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
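The `raid_bdev_examine_sb` lines above show the decision that drives this rebuild: the superblock sequence number on the returning bdev `spare` (4) is smaller than the one held by the running array `raid_bdev1` (5), so the member is treated as stale and re-added for rebuild rather than trusted as-is. A hedged sketch of that comparison as inferred from the log messages (variable names are illustrative, not SPDK API):

```shell
sb_seq_on_bdev=4   # "raid superblock seq_number on bdev spare (4)"
sb_seq_on_raid=5   # "existing raid bdev raid_bdev1 (5)"
if [ "$sb_seq_on_bdev" -lt "$sb_seq_on_raid" ]; then
  action="re-add"  # stale member: rebuild it back into the array
else
  action="keep"    # superblock is current: accept the member as-is
fi
echo "decision for spare: $action"
```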
00:15:26.666 [2024-12-06 09:52:51.652116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.666 [2024-12-06 09:52:51.667074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:26.666 spare 00:15:26.666 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.666 09:52:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:26.666 [2024-12-06 09:52:51.674016] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.605 "name": "raid_bdev1", 00:15:27.605 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:27.605 "strip_size_kb": 64, 00:15:27.605 "state": 
"online", 00:15:27.605 "raid_level": "raid5f", 00:15:27.605 "superblock": true, 00:15:27.605 "num_base_bdevs": 3, 00:15:27.605 "num_base_bdevs_discovered": 3, 00:15:27.605 "num_base_bdevs_operational": 3, 00:15:27.605 "process": { 00:15:27.605 "type": "rebuild", 00:15:27.605 "target": "spare", 00:15:27.605 "progress": { 00:15:27.605 "blocks": 20480, 00:15:27.605 "percent": 16 00:15:27.605 } 00:15:27.605 }, 00:15:27.605 "base_bdevs_list": [ 00:15:27.605 { 00:15:27.605 "name": "spare", 00:15:27.605 "uuid": "80d2a6e3-e11e-5939-8221-f68ca702ef7a", 00:15:27.605 "is_configured": true, 00:15:27.605 "data_offset": 2048, 00:15:27.605 "data_size": 63488 00:15:27.605 }, 00:15:27.605 { 00:15:27.605 "name": "BaseBdev2", 00:15:27.605 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:27.605 "is_configured": true, 00:15:27.605 "data_offset": 2048, 00:15:27.605 "data_size": 63488 00:15:27.605 }, 00:15:27.605 { 00:15:27.605 "name": "BaseBdev3", 00:15:27.605 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:27.605 "is_configured": true, 00:15:27.605 "data_offset": 2048, 00:15:27.605 "data_size": 63488 00:15:27.605 } 00:15:27.605 ] 00:15:27.605 }' 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.605 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.605 [2024-12-06 09:52:52.816845] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.863 [2024-12-06 09:52:52.882411] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:27.863 [2024-12-06 09:52:52.882464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.863 [2024-12-06 09:52:52.882498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.863 [2024-12-06 09:52:52.882505] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.863 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.863 "name": "raid_bdev1", 00:15:27.863 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:27.863 "strip_size_kb": 64, 00:15:27.863 "state": "online", 00:15:27.863 "raid_level": "raid5f", 00:15:27.863 "superblock": true, 00:15:27.863 "num_base_bdevs": 3, 00:15:27.863 "num_base_bdevs_discovered": 2, 00:15:27.863 "num_base_bdevs_operational": 2, 00:15:27.863 "base_bdevs_list": [ 00:15:27.863 { 00:15:27.863 "name": null, 00:15:27.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.863 "is_configured": false, 00:15:27.863 "data_offset": 0, 00:15:27.863 "data_size": 63488 00:15:27.863 }, 00:15:27.863 { 00:15:27.863 "name": "BaseBdev2", 00:15:27.863 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:27.863 "is_configured": true, 00:15:27.863 "data_offset": 2048, 00:15:27.863 "data_size": 63488 00:15:27.863 }, 00:15:27.863 { 00:15:27.864 "name": "BaseBdev3", 00:15:27.864 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:27.864 "is_configured": true, 00:15:27.864 "data_offset": 2048, 00:15:27.864 "data_size": 63488 00:15:27.864 } 00:15:27.864 ] 00:15:27.864 }' 00:15:27.864 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.864 09:52:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.123 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:28.123 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:28.123 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:28.123 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:28.123 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.123 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.123 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.123 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.123 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.123 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.123 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.123 "name": "raid_bdev1", 00:15:28.123 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:28.123 "strip_size_kb": 64, 00:15:28.123 "state": "online", 00:15:28.123 "raid_level": "raid5f", 00:15:28.123 "superblock": true, 00:15:28.123 "num_base_bdevs": 3, 00:15:28.123 "num_base_bdevs_discovered": 2, 00:15:28.123 "num_base_bdevs_operational": 2, 00:15:28.123 "base_bdevs_list": [ 00:15:28.123 { 00:15:28.123 "name": null, 00:15:28.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.123 "is_configured": false, 00:15:28.123 "data_offset": 0, 00:15:28.123 "data_size": 63488 00:15:28.123 }, 00:15:28.123 { 00:15:28.123 "name": "BaseBdev2", 00:15:28.123 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:28.123 "is_configured": true, 00:15:28.123 "data_offset": 2048, 00:15:28.123 "data_size": 63488 00:15:28.123 }, 00:15:28.123 { 00:15:28.123 "name": "BaseBdev3", 00:15:28.123 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:28.123 "is_configured": true, 
00:15:28.123 "data_offset": 2048, 00:15:28.123 "data_size": 63488 00:15:28.123 } 00:15:28.123 ] 00:15:28.123 }' 00:15:28.123 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.383 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:28.383 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.383 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.383 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:28.383 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.383 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.383 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.383 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:28.383 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.383 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.383 [2024-12-06 09:52:53.475475] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:28.383 [2024-12-06 09:52:53.475541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.383 [2024-12-06 09:52:53.475570] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:28.383 [2024-12-06 09:52:53.475579] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.383 [2024-12-06 09:52:53.476046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.383 [2024-12-06 
09:52:53.476063] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:28.383 [2024-12-06 09:52:53.476144] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:28.383 [2024-12-06 09:52:53.476173] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:28.383 [2024-12-06 09:52:53.476193] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:28.383 [2024-12-06 09:52:53.476203] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:28.383 BaseBdev1 00:15:28.383 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.383 09:52:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.323 09:52:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.323 "name": "raid_bdev1", 00:15:29.323 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:29.323 "strip_size_kb": 64, 00:15:29.323 "state": "online", 00:15:29.323 "raid_level": "raid5f", 00:15:29.323 "superblock": true, 00:15:29.323 "num_base_bdevs": 3, 00:15:29.323 "num_base_bdevs_discovered": 2, 00:15:29.323 "num_base_bdevs_operational": 2, 00:15:29.323 "base_bdevs_list": [ 00:15:29.323 { 00:15:29.323 "name": null, 00:15:29.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.323 "is_configured": false, 00:15:29.323 "data_offset": 0, 00:15:29.323 "data_size": 63488 00:15:29.323 }, 00:15:29.323 { 00:15:29.323 "name": "BaseBdev2", 00:15:29.323 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:29.323 "is_configured": true, 00:15:29.323 "data_offset": 2048, 00:15:29.323 "data_size": 63488 00:15:29.323 }, 00:15:29.323 { 00:15:29.323 "name": "BaseBdev3", 00:15:29.323 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:29.323 "is_configured": true, 00:15:29.323 "data_offset": 2048, 00:15:29.323 "data_size": 63488 00:15:29.323 } 00:15:29.323 ] 00:15:29.323 }' 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.323 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.895 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.895 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.895 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.895 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.895 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.895 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.895 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.895 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.895 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.895 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.895 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.895 "name": "raid_bdev1", 00:15:29.895 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:29.895 "strip_size_kb": 64, 00:15:29.895 "state": "online", 00:15:29.895 "raid_level": "raid5f", 00:15:29.895 "superblock": true, 00:15:29.895 "num_base_bdevs": 3, 00:15:29.895 "num_base_bdevs_discovered": 2, 00:15:29.895 "num_base_bdevs_operational": 2, 00:15:29.895 "base_bdevs_list": [ 00:15:29.895 { 00:15:29.895 "name": null, 00:15:29.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.895 "is_configured": false, 00:15:29.895 "data_offset": 0, 00:15:29.896 "data_size": 63488 00:15:29.896 }, 00:15:29.896 { 00:15:29.896 "name": "BaseBdev2", 00:15:29.896 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 
00:15:29.896 "is_configured": true, 00:15:29.896 "data_offset": 2048, 00:15:29.896 "data_size": 63488 00:15:29.896 }, 00:15:29.896 { 00:15:29.896 "name": "BaseBdev3", 00:15:29.896 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:29.896 "is_configured": true, 00:15:29.896 "data_offset": 2048, 00:15:29.896 "data_size": 63488 00:15:29.896 } 00:15:29.896 ] 00:15:29.896 }' 00:15:29.896 09:52:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.896 09:52:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.896 [2024-12-06 09:52:55.072815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.896 [2024-12-06 09:52:55.072982] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:29.896 [2024-12-06 09:52:55.072998] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:29.896 request: 00:15:29.896 { 00:15:29.896 "base_bdev": "BaseBdev1", 00:15:29.896 "raid_bdev": "raid_bdev1", 00:15:29.896 "method": "bdev_raid_add_base_bdev", 00:15:29.896 "req_id": 1 00:15:29.896 } 00:15:29.896 Got JSON-RPC error response 00:15:29.896 response: 00:15:29.896 { 00:15:29.896 "code": -22, 00:15:29.896 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:29.896 } 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:29.896 09:52:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.842 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.101 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.101 "name": "raid_bdev1", 00:15:31.101 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:31.101 "strip_size_kb": 64, 00:15:31.101 "state": "online", 00:15:31.101 "raid_level": "raid5f", 00:15:31.101 "superblock": true, 00:15:31.101 "num_base_bdevs": 3, 00:15:31.101 "num_base_bdevs_discovered": 2, 00:15:31.101 "num_base_bdevs_operational": 2, 00:15:31.101 "base_bdevs_list": [ 00:15:31.101 { 00:15:31.101 "name": null, 00:15:31.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.101 "is_configured": false, 00:15:31.101 "data_offset": 0, 00:15:31.101 "data_size": 63488 00:15:31.101 }, 00:15:31.101 { 00:15:31.101 
"name": "BaseBdev2", 00:15:31.101 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:31.101 "is_configured": true, 00:15:31.101 "data_offset": 2048, 00:15:31.101 "data_size": 63488 00:15:31.101 }, 00:15:31.101 { 00:15:31.101 "name": "BaseBdev3", 00:15:31.101 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:31.101 "is_configured": true, 00:15:31.101 "data_offset": 2048, 00:15:31.101 "data_size": 63488 00:15:31.101 } 00:15:31.101 ] 00:15:31.101 }' 00:15:31.101 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.101 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.360 "name": "raid_bdev1", 00:15:31.360 "uuid": "67c5e324-732a-45dc-80bc-c846cd1ba126", 00:15:31.360 
"strip_size_kb": 64, 00:15:31.360 "state": "online", 00:15:31.360 "raid_level": "raid5f", 00:15:31.360 "superblock": true, 00:15:31.360 "num_base_bdevs": 3, 00:15:31.360 "num_base_bdevs_discovered": 2, 00:15:31.360 "num_base_bdevs_operational": 2, 00:15:31.360 "base_bdevs_list": [ 00:15:31.360 { 00:15:31.360 "name": null, 00:15:31.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.360 "is_configured": false, 00:15:31.360 "data_offset": 0, 00:15:31.360 "data_size": 63488 00:15:31.360 }, 00:15:31.360 { 00:15:31.360 "name": "BaseBdev2", 00:15:31.360 "uuid": "7eee5e8d-d494-5266-b79c-f51f1d0d8424", 00:15:31.360 "is_configured": true, 00:15:31.360 "data_offset": 2048, 00:15:31.360 "data_size": 63488 00:15:31.360 }, 00:15:31.360 { 00:15:31.360 "name": "BaseBdev3", 00:15:31.360 "uuid": "de858497-606c-5e57-abd0-0a29af0a879c", 00:15:31.360 "is_configured": true, 00:15:31.360 "data_offset": 2048, 00:15:31.360 "data_size": 63488 00:15:31.360 } 00:15:31.360 ] 00:15:31.360 }' 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.360 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.620 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.620 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81886 00:15:31.620 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81886 ']' 00:15:31.620 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81886 00:15:31.620 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:31.620 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.620 09:52:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81886 00:15:31.620 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.620 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.620 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81886' 00:15:31.620 killing process with pid 81886 00:15:31.620 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81886 00:15:31.620 Received shutdown signal, test time was about 60.000000 seconds 00:15:31.620 00:15:31.620 Latency(us) 00:15:31.620 [2024-12-06T09:52:56.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.620 [2024-12-06T09:52:56.893Z] =================================================================================================================== 00:15:31.620 [2024-12-06T09:52:56.893Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:31.620 [2024-12-06 09:52:56.714844] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.620 [2024-12-06 09:52:56.714960] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.620 09:52:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81886 00:15:31.620 [2024-12-06 09:52:56.715024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.620 [2024-12-06 09:52:56.715035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:31.879 [2024-12-06 09:52:57.100946] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.262 09:52:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:33.262 00:15:33.262 real 0m23.230s 00:15:33.262 user 0m29.849s 
00:15:33.262 sys 0m2.637s 00:15:33.263 09:52:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.263 09:52:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.263 ************************************ 00:15:33.263 END TEST raid5f_rebuild_test_sb 00:15:33.263 ************************************ 00:15:33.263 09:52:58 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:33.263 09:52:58 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:33.263 09:52:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:33.263 09:52:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.263 09:52:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:33.263 ************************************ 00:15:33.263 START TEST raid5f_state_function_test 00:15:33.263 ************************************ 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82634 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82634' 00:15:33.263 Process raid pid: 82634 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82634 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82634 ']' 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.263 09:52:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.263 [2024-12-06 09:52:58.360371] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:15:33.263 [2024-12-06 09:52:58.360565] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.523 [2024-12-06 09:52:58.534010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.523 [2024-12-06 09:52:58.647326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.782 [2024-12-06 09:52:58.845791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.782 [2024-12-06 09:52:58.845909] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.041 [2024-12-06 09:52:59.182958] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.041 [2024-12-06 09:52:59.183067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.041 [2024-12-06 09:52:59.183081] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.041 [2024-12-06 09:52:59.183092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.041 [2024-12-06 09:52:59.183098] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:34.041 [2024-12-06 09:52:59.183107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.041 [2024-12-06 09:52:59.183113] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:34.041 [2024-12-06 09:52:59.183137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.041 "name": "Existed_Raid", 00:15:34.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.041 "strip_size_kb": 64, 00:15:34.041 "state": "configuring", 00:15:34.041 "raid_level": "raid5f", 00:15:34.041 "superblock": false, 00:15:34.041 "num_base_bdevs": 4, 00:15:34.041 "num_base_bdevs_discovered": 0, 00:15:34.041 "num_base_bdevs_operational": 4, 00:15:34.041 "base_bdevs_list": [ 00:15:34.041 { 00:15:34.041 "name": "BaseBdev1", 00:15:34.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.041 "is_configured": false, 00:15:34.041 "data_offset": 0, 00:15:34.041 "data_size": 0 00:15:34.041 }, 00:15:34.041 { 00:15:34.041 "name": "BaseBdev2", 00:15:34.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.041 "is_configured": false, 00:15:34.041 "data_offset": 0, 00:15:34.041 "data_size": 0 00:15:34.041 }, 00:15:34.041 { 00:15:34.041 "name": "BaseBdev3", 00:15:34.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.041 "is_configured": false, 00:15:34.041 "data_offset": 0, 00:15:34.041 "data_size": 0 00:15:34.041 }, 00:15:34.041 { 00:15:34.041 "name": "BaseBdev4", 00:15:34.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.041 "is_configured": false, 00:15:34.041 "data_offset": 0, 00:15:34.041 "data_size": 0 00:15:34.041 } 00:15:34.041 ] 00:15:34.041 }' 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.041 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.610 09:52:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.611 [2024-12-06 09:52:59.602205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.611 [2024-12-06 09:52:59.602286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.611 [2024-12-06 09:52:59.610200] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:34.611 [2024-12-06 09:52:59.610272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:34.611 [2024-12-06 09:52:59.610299] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.611 [2024-12-06 09:52:59.610321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.611 [2024-12-06 09:52:59.610338] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.611 [2024-12-06 09:52:59.610358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.611 [2024-12-06 09:52:59.610375] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:34.611 [2024-12-06 09:52:59.610396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.611 [2024-12-06 09:52:59.653343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.611 BaseBdev1 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.611 
09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.611 [ 00:15:34.611 { 00:15:34.611 "name": "BaseBdev1", 00:15:34.611 "aliases": [ 00:15:34.611 "37ac31cb-8a89-4644-a4df-1e83050b74b0" 00:15:34.611 ], 00:15:34.611 "product_name": "Malloc disk", 00:15:34.611 "block_size": 512, 00:15:34.611 "num_blocks": 65536, 00:15:34.611 "uuid": "37ac31cb-8a89-4644-a4df-1e83050b74b0", 00:15:34.611 "assigned_rate_limits": { 00:15:34.611 "rw_ios_per_sec": 0, 00:15:34.611 "rw_mbytes_per_sec": 0, 00:15:34.611 "r_mbytes_per_sec": 0, 00:15:34.611 "w_mbytes_per_sec": 0 00:15:34.611 }, 00:15:34.611 "claimed": true, 00:15:34.611 "claim_type": "exclusive_write", 00:15:34.611 "zoned": false, 00:15:34.611 "supported_io_types": { 00:15:34.611 "read": true, 00:15:34.611 "write": true, 00:15:34.611 "unmap": true, 00:15:34.611 "flush": true, 00:15:34.611 "reset": true, 00:15:34.611 "nvme_admin": false, 00:15:34.611 "nvme_io": false, 00:15:34.611 "nvme_io_md": false, 00:15:34.611 "write_zeroes": true, 00:15:34.611 "zcopy": true, 00:15:34.611 "get_zone_info": false, 00:15:34.611 "zone_management": false, 00:15:34.611 "zone_append": false, 00:15:34.611 "compare": false, 00:15:34.611 "compare_and_write": false, 00:15:34.611 "abort": true, 00:15:34.611 "seek_hole": false, 00:15:34.611 "seek_data": false, 00:15:34.611 "copy": true, 00:15:34.611 "nvme_iov_md": false 00:15:34.611 }, 00:15:34.611 "memory_domains": [ 00:15:34.611 { 00:15:34.611 "dma_device_id": "system", 00:15:34.611 "dma_device_type": 1 00:15:34.611 }, 00:15:34.611 { 00:15:34.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.611 "dma_device_type": 2 00:15:34.611 } 00:15:34.611 ], 00:15:34.611 "driver_specific": {} 00:15:34.611 } 
00:15:34.611 ] 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.611 "name": "Existed_Raid", 00:15:34.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.611 "strip_size_kb": 64, 00:15:34.611 "state": "configuring", 00:15:34.611 "raid_level": "raid5f", 00:15:34.611 "superblock": false, 00:15:34.611 "num_base_bdevs": 4, 00:15:34.611 "num_base_bdevs_discovered": 1, 00:15:34.611 "num_base_bdevs_operational": 4, 00:15:34.611 "base_bdevs_list": [ 00:15:34.611 { 00:15:34.611 "name": "BaseBdev1", 00:15:34.611 "uuid": "37ac31cb-8a89-4644-a4df-1e83050b74b0", 00:15:34.611 "is_configured": true, 00:15:34.611 "data_offset": 0, 00:15:34.611 "data_size": 65536 00:15:34.611 }, 00:15:34.611 { 00:15:34.611 "name": "BaseBdev2", 00:15:34.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.611 "is_configured": false, 00:15:34.611 "data_offset": 0, 00:15:34.611 "data_size": 0 00:15:34.611 }, 00:15:34.611 { 00:15:34.611 "name": "BaseBdev3", 00:15:34.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.611 "is_configured": false, 00:15:34.611 "data_offset": 0, 00:15:34.611 "data_size": 0 00:15:34.611 }, 00:15:34.611 { 00:15:34.611 "name": "BaseBdev4", 00:15:34.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.611 "is_configured": false, 00:15:34.611 "data_offset": 0, 00:15:34.611 "data_size": 0 00:15:34.611 } 00:15:34.611 ] 00:15:34.611 }' 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.611 09:52:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.181 
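For reference, the RPC sequence this test exercises can be reproduced by hand against a running SPDK target. This is a sketch only, assuming an SPDK checkout with `scripts/rpc.py` on the default RPC socket and `jq` installed; the bdev names and sizes mirror the commands visible in the log above:

```shell
# Create four 32 MiB malloc bdevs with 512-byte blocks (matches
# "bdev_malloc_create 32 512 -b BaseBdevN" in the log).
for i in 1 2 3 4; do
    ./scripts/rpc.py bdev_malloc_create 32 512 -b "BaseBdev$i"
done

# Assemble them into a raid5f bdev with a 64 KiB strip size
# ("bdev_raid_create -z 64 -r raid5f ... -n Existed_Raid").
./scripts/rpc.py bdev_raid_create -z 64 -r raid5f \
    -b "BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4" -n Existed_Raid

# Query the array; the state should read "online" once all four
# base bdevs have been discovered and claimed.
./scripts/rpc.py bdev_raid_get_bdevs all | \
    jq -r '.[] | select(.name == "Existed_Raid") | .state'
```

Note that the test intentionally creates the raid before its base bdevs exist, which is why the intermediate `bdev_raid_get_bdevs` dumps in the log show `"state": "configuring"` with `num_base_bdevs_discovered` below 4 until the last malloc bdev is added.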
[2024-12-06 09:53:00.156547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:35.181 [2024-12-06 09:53:00.156597] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.181 [2024-12-06 09:53:00.168573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.181 [2024-12-06 09:53:00.170319] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.181 [2024-12-06 09:53:00.170402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.181 [2024-12-06 09:53:00.170417] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:35.181 [2024-12-06 09:53:00.170428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:35.181 [2024-12-06 09:53:00.170434] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:35.181 [2024-12-06 09:53:00.170442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.181 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.181 "name": "Existed_Raid", 00:15:35.181 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:35.181 "strip_size_kb": 64, 00:15:35.181 "state": "configuring", 00:15:35.181 "raid_level": "raid5f", 00:15:35.181 "superblock": false, 00:15:35.181 "num_base_bdevs": 4, 00:15:35.181 "num_base_bdevs_discovered": 1, 00:15:35.181 "num_base_bdevs_operational": 4, 00:15:35.182 "base_bdevs_list": [ 00:15:35.182 { 00:15:35.182 "name": "BaseBdev1", 00:15:35.182 "uuid": "37ac31cb-8a89-4644-a4df-1e83050b74b0", 00:15:35.182 "is_configured": true, 00:15:35.182 "data_offset": 0, 00:15:35.182 "data_size": 65536 00:15:35.182 }, 00:15:35.182 { 00:15:35.182 "name": "BaseBdev2", 00:15:35.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.182 "is_configured": false, 00:15:35.182 "data_offset": 0, 00:15:35.182 "data_size": 0 00:15:35.182 }, 00:15:35.182 { 00:15:35.182 "name": "BaseBdev3", 00:15:35.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.182 "is_configured": false, 00:15:35.182 "data_offset": 0, 00:15:35.182 "data_size": 0 00:15:35.182 }, 00:15:35.182 { 00:15:35.182 "name": "BaseBdev4", 00:15:35.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.182 "is_configured": false, 00:15:35.182 "data_offset": 0, 00:15:35.182 "data_size": 0 00:15:35.182 } 00:15:35.182 ] 00:15:35.182 }' 00:15:35.182 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.182 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.442 [2024-12-06 09:53:00.636313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.442 BaseBdev2 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.442 [ 00:15:35.442 { 00:15:35.442 "name": "BaseBdev2", 00:15:35.442 "aliases": [ 00:15:35.442 "885bf90b-6c50-491d-a362-023e4372e183" 00:15:35.442 ], 00:15:35.442 "product_name": "Malloc disk", 00:15:35.442 "block_size": 512, 00:15:35.442 "num_blocks": 65536, 00:15:35.442 "uuid": "885bf90b-6c50-491d-a362-023e4372e183", 00:15:35.442 "assigned_rate_limits": { 00:15:35.442 "rw_ios_per_sec": 0, 00:15:35.442 "rw_mbytes_per_sec": 0, 00:15:35.442 
"r_mbytes_per_sec": 0, 00:15:35.442 "w_mbytes_per_sec": 0 00:15:35.442 }, 00:15:35.442 "claimed": true, 00:15:35.442 "claim_type": "exclusive_write", 00:15:35.442 "zoned": false, 00:15:35.442 "supported_io_types": { 00:15:35.442 "read": true, 00:15:35.442 "write": true, 00:15:35.442 "unmap": true, 00:15:35.442 "flush": true, 00:15:35.442 "reset": true, 00:15:35.442 "nvme_admin": false, 00:15:35.442 "nvme_io": false, 00:15:35.442 "nvme_io_md": false, 00:15:35.442 "write_zeroes": true, 00:15:35.442 "zcopy": true, 00:15:35.442 "get_zone_info": false, 00:15:35.442 "zone_management": false, 00:15:35.442 "zone_append": false, 00:15:35.442 "compare": false, 00:15:35.442 "compare_and_write": false, 00:15:35.442 "abort": true, 00:15:35.442 "seek_hole": false, 00:15:35.442 "seek_data": false, 00:15:35.442 "copy": true, 00:15:35.442 "nvme_iov_md": false 00:15:35.442 }, 00:15:35.442 "memory_domains": [ 00:15:35.442 { 00:15:35.442 "dma_device_id": "system", 00:15:35.442 "dma_device_type": 1 00:15:35.442 }, 00:15:35.442 { 00:15:35.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.442 "dma_device_type": 2 00:15:35.442 } 00:15:35.442 ], 00:15:35.442 "driver_specific": {} 00:15:35.442 } 00:15:35.442 ] 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.442 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.702 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.702 "name": "Existed_Raid", 00:15:35.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.702 "strip_size_kb": 64, 00:15:35.702 "state": "configuring", 00:15:35.702 "raid_level": "raid5f", 00:15:35.702 "superblock": false, 00:15:35.702 "num_base_bdevs": 4, 00:15:35.702 "num_base_bdevs_discovered": 2, 00:15:35.702 "num_base_bdevs_operational": 4, 00:15:35.702 "base_bdevs_list": [ 00:15:35.702 { 00:15:35.702 "name": "BaseBdev1", 00:15:35.702 "uuid": 
"37ac31cb-8a89-4644-a4df-1e83050b74b0", 00:15:35.702 "is_configured": true, 00:15:35.702 "data_offset": 0, 00:15:35.702 "data_size": 65536 00:15:35.702 }, 00:15:35.702 { 00:15:35.702 "name": "BaseBdev2", 00:15:35.702 "uuid": "885bf90b-6c50-491d-a362-023e4372e183", 00:15:35.702 "is_configured": true, 00:15:35.702 "data_offset": 0, 00:15:35.702 "data_size": 65536 00:15:35.702 }, 00:15:35.702 { 00:15:35.702 "name": "BaseBdev3", 00:15:35.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.702 "is_configured": false, 00:15:35.702 "data_offset": 0, 00:15:35.702 "data_size": 0 00:15:35.702 }, 00:15:35.702 { 00:15:35.702 "name": "BaseBdev4", 00:15:35.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.702 "is_configured": false, 00:15:35.702 "data_offset": 0, 00:15:35.702 "data_size": 0 00:15:35.702 } 00:15:35.702 ] 00:15:35.702 }' 00:15:35.702 09:53:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.702 09:53:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.962 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.963 [2024-12-06 09:53:01.085710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.963 BaseBdev3 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.963 [ 00:15:35.963 { 00:15:35.963 "name": "BaseBdev3", 00:15:35.963 "aliases": [ 00:15:35.963 "a2710940-b100-4914-a173-69405137508c" 00:15:35.963 ], 00:15:35.963 "product_name": "Malloc disk", 00:15:35.963 "block_size": 512, 00:15:35.963 "num_blocks": 65536, 00:15:35.963 "uuid": "a2710940-b100-4914-a173-69405137508c", 00:15:35.963 "assigned_rate_limits": { 00:15:35.963 "rw_ios_per_sec": 0, 00:15:35.963 "rw_mbytes_per_sec": 0, 00:15:35.963 "r_mbytes_per_sec": 0, 00:15:35.963 "w_mbytes_per_sec": 0 00:15:35.963 }, 00:15:35.963 "claimed": true, 00:15:35.963 "claim_type": "exclusive_write", 00:15:35.963 "zoned": false, 00:15:35.963 "supported_io_types": { 00:15:35.963 "read": true, 00:15:35.963 "write": true, 00:15:35.963 "unmap": true, 00:15:35.963 "flush": true, 00:15:35.963 "reset": true, 00:15:35.963 "nvme_admin": false, 
00:15:35.963 "nvme_io": false, 00:15:35.963 "nvme_io_md": false, 00:15:35.963 "write_zeroes": true, 00:15:35.963 "zcopy": true, 00:15:35.963 "get_zone_info": false, 00:15:35.963 "zone_management": false, 00:15:35.963 "zone_append": false, 00:15:35.963 "compare": false, 00:15:35.963 "compare_and_write": false, 00:15:35.963 "abort": true, 00:15:35.963 "seek_hole": false, 00:15:35.963 "seek_data": false, 00:15:35.963 "copy": true, 00:15:35.963 "nvme_iov_md": false 00:15:35.963 }, 00:15:35.963 "memory_domains": [ 00:15:35.963 { 00:15:35.963 "dma_device_id": "system", 00:15:35.963 "dma_device_type": 1 00:15:35.963 }, 00:15:35.963 { 00:15:35.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.963 "dma_device_type": 2 00:15:35.963 } 00:15:35.963 ], 00:15:35.963 "driver_specific": {} 00:15:35.963 } 00:15:35.963 ] 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.963 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.963 "name": "Existed_Raid", 00:15:35.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.963 "strip_size_kb": 64, 00:15:35.963 "state": "configuring", 00:15:35.963 "raid_level": "raid5f", 00:15:35.963 "superblock": false, 00:15:35.963 "num_base_bdevs": 4, 00:15:35.963 "num_base_bdevs_discovered": 3, 00:15:35.963 "num_base_bdevs_operational": 4, 00:15:35.963 "base_bdevs_list": [ 00:15:35.963 { 00:15:35.963 "name": "BaseBdev1", 00:15:35.963 "uuid": "37ac31cb-8a89-4644-a4df-1e83050b74b0", 00:15:35.963 "is_configured": true, 00:15:35.963 "data_offset": 0, 00:15:35.963 "data_size": 65536 00:15:35.963 }, 00:15:35.963 { 00:15:35.963 "name": "BaseBdev2", 00:15:35.963 "uuid": "885bf90b-6c50-491d-a362-023e4372e183", 00:15:35.963 "is_configured": true, 00:15:35.963 "data_offset": 0, 00:15:35.963 "data_size": 65536 00:15:35.963 }, 00:15:35.963 { 
00:15:35.963 "name": "BaseBdev3", 00:15:35.963 "uuid": "a2710940-b100-4914-a173-69405137508c", 00:15:35.963 "is_configured": true, 00:15:35.963 "data_offset": 0, 00:15:35.963 "data_size": 65536 00:15:35.963 }, 00:15:35.963 { 00:15:35.963 "name": "BaseBdev4", 00:15:35.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.963 "is_configured": false, 00:15:35.963 "data_offset": 0, 00:15:35.963 "data_size": 0 00:15:35.964 } 00:15:35.964 ] 00:15:35.964 }' 00:15:35.964 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.964 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.535 [2024-12-06 09:53:01.608915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:36.535 [2024-12-06 09:53:01.609047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:36.535 [2024-12-06 09:53:01.609076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:36.535 [2024-12-06 09:53:01.609380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:36.535 [2024-12-06 09:53:01.616422] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:36.535 [2024-12-06 09:53:01.616480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:36.535 [2024-12-06 09:53:01.616795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.535 BaseBdev4 00:15:36.535 09:53:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.535 [ 00:15:36.535 { 00:15:36.535 "name": "BaseBdev4", 00:15:36.535 "aliases": [ 00:15:36.535 "0a1508ae-696e-410d-a2db-e5ec7d45b0f1" 00:15:36.535 ], 00:15:36.535 "product_name": "Malloc disk", 00:15:36.535 "block_size": 512, 00:15:36.535 "num_blocks": 65536, 00:15:36.535 "uuid": "0a1508ae-696e-410d-a2db-e5ec7d45b0f1", 00:15:36.535 "assigned_rate_limits": { 00:15:36.535 "rw_ios_per_sec": 0, 00:15:36.535 
"rw_mbytes_per_sec": 0, 00:15:36.535 "r_mbytes_per_sec": 0, 00:15:36.535 "w_mbytes_per_sec": 0 00:15:36.535 }, 00:15:36.535 "claimed": true, 00:15:36.535 "claim_type": "exclusive_write", 00:15:36.535 "zoned": false, 00:15:36.535 "supported_io_types": { 00:15:36.535 "read": true, 00:15:36.535 "write": true, 00:15:36.535 "unmap": true, 00:15:36.535 "flush": true, 00:15:36.535 "reset": true, 00:15:36.535 "nvme_admin": false, 00:15:36.535 "nvme_io": false, 00:15:36.535 "nvme_io_md": false, 00:15:36.535 "write_zeroes": true, 00:15:36.535 "zcopy": true, 00:15:36.535 "get_zone_info": false, 00:15:36.535 "zone_management": false, 00:15:36.535 "zone_append": false, 00:15:36.535 "compare": false, 00:15:36.535 "compare_and_write": false, 00:15:36.535 "abort": true, 00:15:36.535 "seek_hole": false, 00:15:36.535 "seek_data": false, 00:15:36.535 "copy": true, 00:15:36.535 "nvme_iov_md": false 00:15:36.535 }, 00:15:36.535 "memory_domains": [ 00:15:36.535 { 00:15:36.535 "dma_device_id": "system", 00:15:36.535 "dma_device_type": 1 00:15:36.535 }, 00:15:36.535 { 00:15:36.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.535 "dma_device_type": 2 00:15:36.535 } 00:15:36.535 ], 00:15:36.535 "driver_specific": {} 00:15:36.535 } 00:15:36.535 ] 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.535 09:53:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.535 "name": "Existed_Raid", 00:15:36.535 "uuid": "9765c7f5-394f-4469-8319-34c65f79b13f", 00:15:36.535 "strip_size_kb": 64, 00:15:36.535 "state": "online", 00:15:36.535 "raid_level": "raid5f", 00:15:36.535 "superblock": false, 00:15:36.535 "num_base_bdevs": 4, 00:15:36.535 "num_base_bdevs_discovered": 4, 00:15:36.535 "num_base_bdevs_operational": 4, 00:15:36.535 "base_bdevs_list": [ 00:15:36.535 { 00:15:36.535 "name": 
"BaseBdev1", 00:15:36.535 "uuid": "37ac31cb-8a89-4644-a4df-1e83050b74b0", 00:15:36.535 "is_configured": true, 00:15:36.535 "data_offset": 0, 00:15:36.535 "data_size": 65536 00:15:36.535 }, 00:15:36.535 { 00:15:36.535 "name": "BaseBdev2", 00:15:36.535 "uuid": "885bf90b-6c50-491d-a362-023e4372e183", 00:15:36.535 "is_configured": true, 00:15:36.535 "data_offset": 0, 00:15:36.535 "data_size": 65536 00:15:36.535 }, 00:15:36.535 { 00:15:36.535 "name": "BaseBdev3", 00:15:36.535 "uuid": "a2710940-b100-4914-a173-69405137508c", 00:15:36.535 "is_configured": true, 00:15:36.535 "data_offset": 0, 00:15:36.535 "data_size": 65536 00:15:36.535 }, 00:15:36.535 { 00:15:36.535 "name": "BaseBdev4", 00:15:36.535 "uuid": "0a1508ae-696e-410d-a2db-e5ec7d45b0f1", 00:15:36.535 "is_configured": true, 00:15:36.535 "data_offset": 0, 00:15:36.535 "data_size": 65536 00:15:36.535 } 00:15:36.535 ] 00:15:36.535 }' 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.535 09:53:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.105 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:37.105 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:37.105 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:37.105 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:37.105 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:37.106 [2024-12-06 09:53:02.120499] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:37.106 "name": "Existed_Raid", 00:15:37.106 "aliases": [ 00:15:37.106 "9765c7f5-394f-4469-8319-34c65f79b13f" 00:15:37.106 ], 00:15:37.106 "product_name": "Raid Volume", 00:15:37.106 "block_size": 512, 00:15:37.106 "num_blocks": 196608, 00:15:37.106 "uuid": "9765c7f5-394f-4469-8319-34c65f79b13f", 00:15:37.106 "assigned_rate_limits": { 00:15:37.106 "rw_ios_per_sec": 0, 00:15:37.106 "rw_mbytes_per_sec": 0, 00:15:37.106 "r_mbytes_per_sec": 0, 00:15:37.106 "w_mbytes_per_sec": 0 00:15:37.106 }, 00:15:37.106 "claimed": false, 00:15:37.106 "zoned": false, 00:15:37.106 "supported_io_types": { 00:15:37.106 "read": true, 00:15:37.106 "write": true, 00:15:37.106 "unmap": false, 00:15:37.106 "flush": false, 00:15:37.106 "reset": true, 00:15:37.106 "nvme_admin": false, 00:15:37.106 "nvme_io": false, 00:15:37.106 "nvme_io_md": false, 00:15:37.106 "write_zeroes": true, 00:15:37.106 "zcopy": false, 00:15:37.106 "get_zone_info": false, 00:15:37.106 "zone_management": false, 00:15:37.106 "zone_append": false, 00:15:37.106 "compare": false, 00:15:37.106 "compare_and_write": false, 00:15:37.106 "abort": false, 00:15:37.106 "seek_hole": false, 00:15:37.106 "seek_data": false, 00:15:37.106 "copy": false, 00:15:37.106 "nvme_iov_md": false 00:15:37.106 }, 00:15:37.106 "driver_specific": { 00:15:37.106 "raid": { 00:15:37.106 "uuid": "9765c7f5-394f-4469-8319-34c65f79b13f", 00:15:37.106 "strip_size_kb": 64, 
00:15:37.106 "state": "online", 00:15:37.106 "raid_level": "raid5f", 00:15:37.106 "superblock": false, 00:15:37.106 "num_base_bdevs": 4, 00:15:37.106 "num_base_bdevs_discovered": 4, 00:15:37.106 "num_base_bdevs_operational": 4, 00:15:37.106 "base_bdevs_list": [ 00:15:37.106 { 00:15:37.106 "name": "BaseBdev1", 00:15:37.106 "uuid": "37ac31cb-8a89-4644-a4df-1e83050b74b0", 00:15:37.106 "is_configured": true, 00:15:37.106 "data_offset": 0, 00:15:37.106 "data_size": 65536 00:15:37.106 }, 00:15:37.106 { 00:15:37.106 "name": "BaseBdev2", 00:15:37.106 "uuid": "885bf90b-6c50-491d-a362-023e4372e183", 00:15:37.106 "is_configured": true, 00:15:37.106 "data_offset": 0, 00:15:37.106 "data_size": 65536 00:15:37.106 }, 00:15:37.106 { 00:15:37.106 "name": "BaseBdev3", 00:15:37.106 "uuid": "a2710940-b100-4914-a173-69405137508c", 00:15:37.106 "is_configured": true, 00:15:37.106 "data_offset": 0, 00:15:37.106 "data_size": 65536 00:15:37.106 }, 00:15:37.106 { 00:15:37.106 "name": "BaseBdev4", 00:15:37.106 "uuid": "0a1508ae-696e-410d-a2db-e5ec7d45b0f1", 00:15:37.106 "is_configured": true, 00:15:37.106 "data_offset": 0, 00:15:37.106 "data_size": 65536 00:15:37.106 } 00:15:37.106 ] 00:15:37.106 } 00:15:37.106 } 00:15:37.106 }' 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:37.106 BaseBdev2 00:15:37.106 BaseBdev3 00:15:37.106 BaseBdev4' 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.106 09:53:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.106 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:37.367 [2024-12-06 09:53:02.459737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.367 09:53:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.367 "name": "Existed_Raid", 00:15:37.367 "uuid": "9765c7f5-394f-4469-8319-34c65f79b13f", 00:15:37.367 "strip_size_kb": 64, 00:15:37.367 "state": "online", 00:15:37.367 "raid_level": "raid5f", 00:15:37.367 "superblock": false, 00:15:37.367 "num_base_bdevs": 4, 00:15:37.367 "num_base_bdevs_discovered": 3, 00:15:37.367 "num_base_bdevs_operational": 3, 00:15:37.367 "base_bdevs_list": [ 00:15:37.367 { 00:15:37.367 "name": null, 00:15:37.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.367 "is_configured": false, 00:15:37.367 "data_offset": 0, 00:15:37.367 "data_size": 65536 00:15:37.367 }, 00:15:37.367 { 00:15:37.367 "name": "BaseBdev2", 00:15:37.367 "uuid": "885bf90b-6c50-491d-a362-023e4372e183", 00:15:37.367 "is_configured": true, 00:15:37.367 "data_offset": 0, 00:15:37.367 "data_size": 65536 00:15:37.367 }, 00:15:37.367 { 00:15:37.367 "name": "BaseBdev3", 00:15:37.367 "uuid": "a2710940-b100-4914-a173-69405137508c", 00:15:37.367 "is_configured": true, 00:15:37.367 "data_offset": 0, 00:15:37.367 "data_size": 65536 00:15:37.367 }, 00:15:37.367 { 00:15:37.367 "name": "BaseBdev4", 00:15:37.367 "uuid": "0a1508ae-696e-410d-a2db-e5ec7d45b0f1", 00:15:37.367 "is_configured": true, 00:15:37.367 "data_offset": 0, 00:15:37.367 "data_size": 65536 00:15:37.367 } 00:15:37.367 ] 00:15:37.367 }' 00:15:37.367 
09:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.367 09:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.979 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:37.979 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.979 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.979 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.979 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.979 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.980 [2024-12-06 09:53:03.044424] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.980 [2024-12-06 09:53:03.044565] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.980 [2024-12-06 09:53:03.139413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.980 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.980 [2024-12-06 09:53:03.199340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.247 [2024-12-06 09:53:03.349564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:38.247 [2024-12-06 09:53:03.349660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:38.247 09:53:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.247 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.507 BaseBdev2 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.508 [ 00:15:38.508 { 00:15:38.508 "name": "BaseBdev2", 00:15:38.508 "aliases": [ 00:15:38.508 "9691c55d-ed61-466d-bb65-e55e47654a8b" 00:15:38.508 ], 00:15:38.508 "product_name": "Malloc disk", 00:15:38.508 "block_size": 512, 00:15:38.508 "num_blocks": 65536, 00:15:38.508 "uuid": "9691c55d-ed61-466d-bb65-e55e47654a8b", 00:15:38.508 "assigned_rate_limits": { 00:15:38.508 "rw_ios_per_sec": 0, 00:15:38.508 "rw_mbytes_per_sec": 0, 00:15:38.508 "r_mbytes_per_sec": 0, 00:15:38.508 "w_mbytes_per_sec": 0 00:15:38.508 }, 00:15:38.508 "claimed": false, 00:15:38.508 "zoned": false, 00:15:38.508 "supported_io_types": { 00:15:38.508 "read": true, 00:15:38.508 "write": true, 00:15:38.508 "unmap": true, 00:15:38.508 "flush": true, 00:15:38.508 "reset": true, 00:15:38.508 "nvme_admin": false, 00:15:38.508 "nvme_io": false, 00:15:38.508 "nvme_io_md": false, 00:15:38.508 "write_zeroes": true, 00:15:38.508 "zcopy": true, 00:15:38.508 "get_zone_info": false, 00:15:38.508 "zone_management": false, 00:15:38.508 "zone_append": false, 00:15:38.508 "compare": false, 00:15:38.508 "compare_and_write": false, 00:15:38.508 "abort": true, 00:15:38.508 "seek_hole": false, 00:15:38.508 "seek_data": false, 00:15:38.508 "copy": true, 00:15:38.508 "nvme_iov_md": false 00:15:38.508 }, 00:15:38.508 "memory_domains": [ 00:15:38.508 { 00:15:38.508 "dma_device_id": "system", 00:15:38.508 "dma_device_type": 1 00:15:38.508 }, 
00:15:38.508 { 00:15:38.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.508 "dma_device_type": 2 00:15:38.508 } 00:15:38.508 ], 00:15:38.508 "driver_specific": {} 00:15:38.508 } 00:15:38.508 ] 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.508 BaseBdev3 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.508 [ 00:15:38.508 { 00:15:38.508 "name": "BaseBdev3", 00:15:38.508 "aliases": [ 00:15:38.508 "54f95ba9-83d7-4e68-a9f5-ee8fba66c819" 00:15:38.508 ], 00:15:38.508 "product_name": "Malloc disk", 00:15:38.508 "block_size": 512, 00:15:38.508 "num_blocks": 65536, 00:15:38.508 "uuid": "54f95ba9-83d7-4e68-a9f5-ee8fba66c819", 00:15:38.508 "assigned_rate_limits": { 00:15:38.508 "rw_ios_per_sec": 0, 00:15:38.508 "rw_mbytes_per_sec": 0, 00:15:38.508 "r_mbytes_per_sec": 0, 00:15:38.508 "w_mbytes_per_sec": 0 00:15:38.508 }, 00:15:38.508 "claimed": false, 00:15:38.508 "zoned": false, 00:15:38.508 "supported_io_types": { 00:15:38.508 "read": true, 00:15:38.508 "write": true, 00:15:38.508 "unmap": true, 00:15:38.508 "flush": true, 00:15:38.508 "reset": true, 00:15:38.508 "nvme_admin": false, 00:15:38.508 "nvme_io": false, 00:15:38.508 "nvme_io_md": false, 00:15:38.508 "write_zeroes": true, 00:15:38.508 "zcopy": true, 00:15:38.508 "get_zone_info": false, 00:15:38.508 "zone_management": false, 00:15:38.508 "zone_append": false, 00:15:38.508 "compare": false, 00:15:38.508 "compare_and_write": false, 00:15:38.508 "abort": true, 00:15:38.508 "seek_hole": false, 00:15:38.508 "seek_data": false, 00:15:38.508 "copy": true, 00:15:38.508 "nvme_iov_md": false 00:15:38.508 }, 00:15:38.508 "memory_domains": [ 00:15:38.508 { 00:15:38.508 "dma_device_id": "system", 00:15:38.508 
"dma_device_type": 1 00:15:38.508 }, 00:15:38.508 { 00:15:38.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.508 "dma_device_type": 2 00:15:38.508 } 00:15:38.508 ], 00:15:38.508 "driver_specific": {} 00:15:38.508 } 00:15:38.508 ] 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.508 BaseBdev4 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.508 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.508 09:53:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.509 [ 00:15:38.509 { 00:15:38.509 "name": "BaseBdev4", 00:15:38.509 "aliases": [ 00:15:38.509 "19059c9d-ad35-47e4-9b2c-202761826cbe" 00:15:38.509 ], 00:15:38.509 "product_name": "Malloc disk", 00:15:38.509 "block_size": 512, 00:15:38.509 "num_blocks": 65536, 00:15:38.509 "uuid": "19059c9d-ad35-47e4-9b2c-202761826cbe", 00:15:38.509 "assigned_rate_limits": { 00:15:38.509 "rw_ios_per_sec": 0, 00:15:38.509 "rw_mbytes_per_sec": 0, 00:15:38.509 "r_mbytes_per_sec": 0, 00:15:38.509 "w_mbytes_per_sec": 0 00:15:38.509 }, 00:15:38.509 "claimed": false, 00:15:38.509 "zoned": false, 00:15:38.509 "supported_io_types": { 00:15:38.509 "read": true, 00:15:38.509 "write": true, 00:15:38.509 "unmap": true, 00:15:38.509 "flush": true, 00:15:38.509 "reset": true, 00:15:38.509 "nvme_admin": false, 00:15:38.509 "nvme_io": false, 00:15:38.509 "nvme_io_md": false, 00:15:38.509 "write_zeroes": true, 00:15:38.509 "zcopy": true, 00:15:38.509 "get_zone_info": false, 00:15:38.509 "zone_management": false, 00:15:38.509 "zone_append": false, 00:15:38.509 "compare": false, 00:15:38.509 "compare_and_write": false, 00:15:38.509 "abort": true, 00:15:38.509 "seek_hole": false, 00:15:38.509 "seek_data": false, 00:15:38.509 "copy": true, 00:15:38.509 "nvme_iov_md": false 00:15:38.509 }, 00:15:38.509 "memory_domains": [ 00:15:38.509 { 00:15:38.509 
"dma_device_id": "system", 00:15:38.509 "dma_device_type": 1 00:15:38.509 }, 00:15:38.509 { 00:15:38.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.509 "dma_device_type": 2 00:15:38.509 } 00:15:38.509 ], 00:15:38.509 "driver_specific": {} 00:15:38.509 } 00:15:38.509 ] 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.509 [2024-12-06 09:53:03.741591] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.509 [2024-12-06 09:53:03.741672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.509 [2024-12-06 09:53:03.741717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.509 [2024-12-06 09:53:03.743437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.509 [2024-12-06 09:53:03.743523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.509 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.769 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.769 "name": "Existed_Raid", 00:15:38.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.769 "strip_size_kb": 64, 00:15:38.769 "state": "configuring", 00:15:38.769 "raid_level": "raid5f", 00:15:38.769 "superblock": false, 00:15:38.769 
"num_base_bdevs": 4, 00:15:38.769 "num_base_bdevs_discovered": 3, 00:15:38.769 "num_base_bdevs_operational": 4, 00:15:38.769 "base_bdevs_list": [ 00:15:38.769 { 00:15:38.769 "name": "BaseBdev1", 00:15:38.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.769 "is_configured": false, 00:15:38.769 "data_offset": 0, 00:15:38.769 "data_size": 0 00:15:38.769 }, 00:15:38.769 { 00:15:38.769 "name": "BaseBdev2", 00:15:38.769 "uuid": "9691c55d-ed61-466d-bb65-e55e47654a8b", 00:15:38.769 "is_configured": true, 00:15:38.769 "data_offset": 0, 00:15:38.769 "data_size": 65536 00:15:38.769 }, 00:15:38.769 { 00:15:38.769 "name": "BaseBdev3", 00:15:38.769 "uuid": "54f95ba9-83d7-4e68-a9f5-ee8fba66c819", 00:15:38.769 "is_configured": true, 00:15:38.769 "data_offset": 0, 00:15:38.769 "data_size": 65536 00:15:38.769 }, 00:15:38.769 { 00:15:38.769 "name": "BaseBdev4", 00:15:38.769 "uuid": "19059c9d-ad35-47e4-9b2c-202761826cbe", 00:15:38.769 "is_configured": true, 00:15:38.769 "data_offset": 0, 00:15:38.769 "data_size": 65536 00:15:38.769 } 00:15:38.769 ] 00:15:38.769 }' 00:15:38.769 09:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.769 09:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.028 [2024-12-06 09:53:04.188840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.028 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.028 "name": "Existed_Raid", 00:15:39.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.028 "strip_size_kb": 64, 00:15:39.028 "state": "configuring", 00:15:39.028 "raid_level": "raid5f", 00:15:39.028 "superblock": false, 00:15:39.028 "num_base_bdevs": 4, 
00:15:39.028 "num_base_bdevs_discovered": 2, 00:15:39.028 "num_base_bdevs_operational": 4, 00:15:39.028 "base_bdevs_list": [ 00:15:39.028 { 00:15:39.028 "name": "BaseBdev1", 00:15:39.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.028 "is_configured": false, 00:15:39.029 "data_offset": 0, 00:15:39.029 "data_size": 0 00:15:39.029 }, 00:15:39.029 { 00:15:39.029 "name": null, 00:15:39.029 "uuid": "9691c55d-ed61-466d-bb65-e55e47654a8b", 00:15:39.029 "is_configured": false, 00:15:39.029 "data_offset": 0, 00:15:39.029 "data_size": 65536 00:15:39.029 }, 00:15:39.029 { 00:15:39.029 "name": "BaseBdev3", 00:15:39.029 "uuid": "54f95ba9-83d7-4e68-a9f5-ee8fba66c819", 00:15:39.029 "is_configured": true, 00:15:39.029 "data_offset": 0, 00:15:39.029 "data_size": 65536 00:15:39.029 }, 00:15:39.029 { 00:15:39.029 "name": "BaseBdev4", 00:15:39.029 "uuid": "19059c9d-ad35-47e4-9b2c-202761826cbe", 00:15:39.029 "is_configured": true, 00:15:39.029 "data_offset": 0, 00:15:39.029 "data_size": 65536 00:15:39.029 } 00:15:39.029 ] 00:15:39.029 }' 00:15:39.029 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.029 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:39.597 09:53:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.597 [2024-12-06 09:53:04.727855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.597 BaseBdev1 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:39.597 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.597 09:53:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.597 [ 00:15:39.597 { 00:15:39.597 "name": "BaseBdev1", 00:15:39.597 "aliases": [ 00:15:39.598 "13b544be-3be8-4e9c-b330-ff480f698bd9" 00:15:39.598 ], 00:15:39.598 "product_name": "Malloc disk", 00:15:39.598 "block_size": 512, 00:15:39.598 "num_blocks": 65536, 00:15:39.598 "uuid": "13b544be-3be8-4e9c-b330-ff480f698bd9", 00:15:39.598 "assigned_rate_limits": { 00:15:39.598 "rw_ios_per_sec": 0, 00:15:39.598 "rw_mbytes_per_sec": 0, 00:15:39.598 "r_mbytes_per_sec": 0, 00:15:39.598 "w_mbytes_per_sec": 0 00:15:39.598 }, 00:15:39.598 "claimed": true, 00:15:39.598 "claim_type": "exclusive_write", 00:15:39.598 "zoned": false, 00:15:39.598 "supported_io_types": { 00:15:39.598 "read": true, 00:15:39.598 "write": true, 00:15:39.598 "unmap": true, 00:15:39.598 "flush": true, 00:15:39.598 "reset": true, 00:15:39.598 "nvme_admin": false, 00:15:39.598 "nvme_io": false, 00:15:39.598 "nvme_io_md": false, 00:15:39.598 "write_zeroes": true, 00:15:39.598 "zcopy": true, 00:15:39.598 "get_zone_info": false, 00:15:39.598 "zone_management": false, 00:15:39.598 "zone_append": false, 00:15:39.598 "compare": false, 00:15:39.598 "compare_and_write": false, 00:15:39.598 "abort": true, 00:15:39.598 "seek_hole": false, 00:15:39.598 "seek_data": false, 00:15:39.598 "copy": true, 00:15:39.598 "nvme_iov_md": false 00:15:39.598 }, 00:15:39.598 "memory_domains": [ 00:15:39.598 { 00:15:39.598 "dma_device_id": "system", 00:15:39.598 "dma_device_type": 1 00:15:39.598 }, 00:15:39.598 { 00:15:39.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.598 "dma_device_type": 2 00:15:39.598 } 00:15:39.598 ], 00:15:39.598 "driver_specific": {} 00:15:39.598 } 00:15:39.598 ] 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:39.598 09:53:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.598 "name": "Existed_Raid", 00:15:39.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.598 "strip_size_kb": 64, 00:15:39.598 "state": 
"configuring", 00:15:39.598 "raid_level": "raid5f", 00:15:39.598 "superblock": false, 00:15:39.598 "num_base_bdevs": 4, 00:15:39.598 "num_base_bdevs_discovered": 3, 00:15:39.598 "num_base_bdevs_operational": 4, 00:15:39.598 "base_bdevs_list": [ 00:15:39.598 { 00:15:39.598 "name": "BaseBdev1", 00:15:39.598 "uuid": "13b544be-3be8-4e9c-b330-ff480f698bd9", 00:15:39.598 "is_configured": true, 00:15:39.598 "data_offset": 0, 00:15:39.598 "data_size": 65536 00:15:39.598 }, 00:15:39.598 { 00:15:39.598 "name": null, 00:15:39.598 "uuid": "9691c55d-ed61-466d-bb65-e55e47654a8b", 00:15:39.598 "is_configured": false, 00:15:39.598 "data_offset": 0, 00:15:39.598 "data_size": 65536 00:15:39.598 }, 00:15:39.598 { 00:15:39.598 "name": "BaseBdev3", 00:15:39.598 "uuid": "54f95ba9-83d7-4e68-a9f5-ee8fba66c819", 00:15:39.598 "is_configured": true, 00:15:39.598 "data_offset": 0, 00:15:39.598 "data_size": 65536 00:15:39.598 }, 00:15:39.598 { 00:15:39.598 "name": "BaseBdev4", 00:15:39.598 "uuid": "19059c9d-ad35-47e4-9b2c-202761826cbe", 00:15:39.598 "is_configured": true, 00:15:39.598 "data_offset": 0, 00:15:39.598 "data_size": 65536 00:15:39.598 } 00:15:39.598 ] 00:15:39.598 }' 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.598 09:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.166 09:53:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.166 [2024-12-06 09:53:05.255017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.166 09:53:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.166 "name": "Existed_Raid", 00:15:40.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.166 "strip_size_kb": 64, 00:15:40.166 "state": "configuring", 00:15:40.166 "raid_level": "raid5f", 00:15:40.166 "superblock": false, 00:15:40.166 "num_base_bdevs": 4, 00:15:40.166 "num_base_bdevs_discovered": 2, 00:15:40.166 "num_base_bdevs_operational": 4, 00:15:40.166 "base_bdevs_list": [ 00:15:40.166 { 00:15:40.166 "name": "BaseBdev1", 00:15:40.166 "uuid": "13b544be-3be8-4e9c-b330-ff480f698bd9", 00:15:40.166 "is_configured": true, 00:15:40.166 "data_offset": 0, 00:15:40.166 "data_size": 65536 00:15:40.166 }, 00:15:40.166 { 00:15:40.166 "name": null, 00:15:40.166 "uuid": "9691c55d-ed61-466d-bb65-e55e47654a8b", 00:15:40.166 "is_configured": false, 00:15:40.166 "data_offset": 0, 00:15:40.166 "data_size": 65536 00:15:40.166 }, 00:15:40.166 { 00:15:40.166 "name": null, 00:15:40.166 "uuid": "54f95ba9-83d7-4e68-a9f5-ee8fba66c819", 00:15:40.166 "is_configured": false, 00:15:40.166 "data_offset": 0, 00:15:40.166 "data_size": 65536 00:15:40.166 }, 00:15:40.166 { 00:15:40.166 "name": "BaseBdev4", 00:15:40.166 "uuid": "19059c9d-ad35-47e4-9b2c-202761826cbe", 00:15:40.166 "is_configured": true, 00:15:40.166 "data_offset": 0, 00:15:40.166 "data_size": 65536 00:15:40.166 } 00:15:40.166 ] 00:15:40.166 }' 00:15:40.166 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.166 09:53:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.425 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.425 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.425 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.425 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:40.425 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.683 [2024-12-06 09:53:05.726201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.683 
09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.683 "name": "Existed_Raid", 00:15:40.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.683 "strip_size_kb": 64, 00:15:40.683 "state": "configuring", 00:15:40.683 "raid_level": "raid5f", 00:15:40.683 "superblock": false, 00:15:40.683 "num_base_bdevs": 4, 00:15:40.683 "num_base_bdevs_discovered": 3, 00:15:40.683 "num_base_bdevs_operational": 4, 00:15:40.683 "base_bdevs_list": [ 00:15:40.683 { 00:15:40.683 "name": "BaseBdev1", 00:15:40.683 "uuid": "13b544be-3be8-4e9c-b330-ff480f698bd9", 00:15:40.683 "is_configured": true, 00:15:40.683 "data_offset": 0, 00:15:40.683 "data_size": 65536 00:15:40.683 }, 00:15:40.683 { 00:15:40.683 "name": null, 00:15:40.683 "uuid": "9691c55d-ed61-466d-bb65-e55e47654a8b", 00:15:40.683 "is_configured": 
false, 00:15:40.683 "data_offset": 0, 00:15:40.683 "data_size": 65536 00:15:40.683 }, 00:15:40.683 { 00:15:40.683 "name": "BaseBdev3", 00:15:40.683 "uuid": "54f95ba9-83d7-4e68-a9f5-ee8fba66c819", 00:15:40.683 "is_configured": true, 00:15:40.683 "data_offset": 0, 00:15:40.683 "data_size": 65536 00:15:40.683 }, 00:15:40.683 { 00:15:40.683 "name": "BaseBdev4", 00:15:40.683 "uuid": "19059c9d-ad35-47e4-9b2c-202761826cbe", 00:15:40.683 "is_configured": true, 00:15:40.683 "data_offset": 0, 00:15:40.683 "data_size": 65536 00:15:40.683 } 00:15:40.683 ] 00:15:40.683 }' 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.683 09:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.943 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.943 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.943 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.943 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:40.943 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.943 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:40.943 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:40.943 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.943 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.943 [2024-12-06 09:53:06.185444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.201 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.201 "name": "Existed_Raid", 00:15:41.201 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:41.201 "strip_size_kb": 64, 00:15:41.201 "state": "configuring", 00:15:41.201 "raid_level": "raid5f", 00:15:41.201 "superblock": false, 00:15:41.201 "num_base_bdevs": 4, 00:15:41.201 "num_base_bdevs_discovered": 2, 00:15:41.201 "num_base_bdevs_operational": 4, 00:15:41.201 "base_bdevs_list": [ 00:15:41.201 { 00:15:41.201 "name": null, 00:15:41.201 "uuid": "13b544be-3be8-4e9c-b330-ff480f698bd9", 00:15:41.201 "is_configured": false, 00:15:41.201 "data_offset": 0, 00:15:41.201 "data_size": 65536 00:15:41.201 }, 00:15:41.201 { 00:15:41.201 "name": null, 00:15:41.201 "uuid": "9691c55d-ed61-466d-bb65-e55e47654a8b", 00:15:41.201 "is_configured": false, 00:15:41.201 "data_offset": 0, 00:15:41.201 "data_size": 65536 00:15:41.201 }, 00:15:41.201 { 00:15:41.202 "name": "BaseBdev3", 00:15:41.202 "uuid": "54f95ba9-83d7-4e68-a9f5-ee8fba66c819", 00:15:41.202 "is_configured": true, 00:15:41.202 "data_offset": 0, 00:15:41.202 "data_size": 65536 00:15:41.202 }, 00:15:41.202 { 00:15:41.202 "name": "BaseBdev4", 00:15:41.202 "uuid": "19059c9d-ad35-47e4-9b2c-202761826cbe", 00:15:41.202 "is_configured": true, 00:15:41.202 "data_offset": 0, 00:15:41.202 "data_size": 65536 00:15:41.202 } 00:15:41.202 ] 00:15:41.202 }' 00:15:41.202 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.202 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.461 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.461 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:41.461 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.461 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.721 [2024-12-06 09:53:06.768779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.721 "name": "Existed_Raid", 00:15:41.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.721 "strip_size_kb": 64, 00:15:41.721 "state": "configuring", 00:15:41.721 "raid_level": "raid5f", 00:15:41.721 "superblock": false, 00:15:41.721 "num_base_bdevs": 4, 00:15:41.721 "num_base_bdevs_discovered": 3, 00:15:41.721 "num_base_bdevs_operational": 4, 00:15:41.721 "base_bdevs_list": [ 00:15:41.721 { 00:15:41.721 "name": null, 00:15:41.721 "uuid": "13b544be-3be8-4e9c-b330-ff480f698bd9", 00:15:41.721 "is_configured": false, 00:15:41.721 "data_offset": 0, 00:15:41.721 "data_size": 65536 00:15:41.721 }, 00:15:41.721 { 00:15:41.721 "name": "BaseBdev2", 00:15:41.721 "uuid": "9691c55d-ed61-466d-bb65-e55e47654a8b", 00:15:41.721 "is_configured": true, 00:15:41.721 "data_offset": 0, 00:15:41.721 "data_size": 65536 00:15:41.721 }, 00:15:41.721 { 00:15:41.721 "name": "BaseBdev3", 00:15:41.721 "uuid": "54f95ba9-83d7-4e68-a9f5-ee8fba66c819", 00:15:41.721 "is_configured": true, 00:15:41.721 "data_offset": 0, 00:15:41.721 "data_size": 65536 00:15:41.721 }, 00:15:41.721 { 00:15:41.721 "name": "BaseBdev4", 00:15:41.721 "uuid": "19059c9d-ad35-47e4-9b2c-202761826cbe", 00:15:41.721 "is_configured": true, 00:15:41.721 "data_offset": 0, 00:15:41.721 "data_size": 65536 00:15:41.721 } 00:15:41.721 ] 00:15:41.721 }' 00:15:41.721 09:53:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.721 09:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.981 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:41.981 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.981 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.981 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.981 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.981 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:41.981 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.981 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:41.981 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.981 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.258 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.258 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 13b544be-3be8-4e9c-b330-ff480f698bd9 00:15:42.258 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.258 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.258 [2024-12-06 09:53:07.327276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:42.258 [2024-12-06 
09:53:07.327386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:42.258 [2024-12-06 09:53:07.327410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:42.258 [2024-12-06 09:53:07.327696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:42.258 [2024-12-06 09:53:07.334348] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:42.258 [2024-12-06 09:53:07.334408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:42.258 [2024-12-06 09:53:07.334711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.258 NewBaseBdev 00:15:42.258 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.258 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:42.258 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:42.258 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:42.258 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:42.258 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:42.258 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:42.258 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.259 [ 00:15:42.259 { 00:15:42.259 "name": "NewBaseBdev", 00:15:42.259 "aliases": [ 00:15:42.259 "13b544be-3be8-4e9c-b330-ff480f698bd9" 00:15:42.259 ], 00:15:42.259 "product_name": "Malloc disk", 00:15:42.259 "block_size": 512, 00:15:42.259 "num_blocks": 65536, 00:15:42.259 "uuid": "13b544be-3be8-4e9c-b330-ff480f698bd9", 00:15:42.259 "assigned_rate_limits": { 00:15:42.259 "rw_ios_per_sec": 0, 00:15:42.259 "rw_mbytes_per_sec": 0, 00:15:42.259 "r_mbytes_per_sec": 0, 00:15:42.259 "w_mbytes_per_sec": 0 00:15:42.259 }, 00:15:42.259 "claimed": true, 00:15:42.259 "claim_type": "exclusive_write", 00:15:42.259 "zoned": false, 00:15:42.259 "supported_io_types": { 00:15:42.259 "read": true, 00:15:42.259 "write": true, 00:15:42.259 "unmap": true, 00:15:42.259 "flush": true, 00:15:42.259 "reset": true, 00:15:42.259 "nvme_admin": false, 00:15:42.259 "nvme_io": false, 00:15:42.259 "nvme_io_md": false, 00:15:42.259 "write_zeroes": true, 00:15:42.259 "zcopy": true, 00:15:42.259 "get_zone_info": false, 00:15:42.259 "zone_management": false, 00:15:42.259 "zone_append": false, 00:15:42.259 "compare": false, 00:15:42.259 "compare_and_write": false, 00:15:42.259 "abort": true, 00:15:42.259 "seek_hole": false, 00:15:42.259 "seek_data": false, 00:15:42.259 "copy": true, 00:15:42.259 "nvme_iov_md": false 00:15:42.259 }, 00:15:42.259 "memory_domains": [ 00:15:42.259 { 00:15:42.259 "dma_device_id": "system", 00:15:42.259 "dma_device_type": 1 00:15:42.259 }, 00:15:42.259 { 00:15:42.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.259 "dma_device_type": 2 00:15:42.259 } 
00:15:42.259 ], 00:15:42.259 "driver_specific": {} 00:15:42.259 } 00:15:42.259 ] 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.259 "name": "Existed_Raid", 00:15:42.259 "uuid": "eb24fadb-a6f4-46fc-823e-df5d02df4e60", 00:15:42.259 "strip_size_kb": 64, 00:15:42.259 "state": "online", 00:15:42.259 "raid_level": "raid5f", 00:15:42.259 "superblock": false, 00:15:42.259 "num_base_bdevs": 4, 00:15:42.259 "num_base_bdevs_discovered": 4, 00:15:42.259 "num_base_bdevs_operational": 4, 00:15:42.259 "base_bdevs_list": [ 00:15:42.259 { 00:15:42.259 "name": "NewBaseBdev", 00:15:42.259 "uuid": "13b544be-3be8-4e9c-b330-ff480f698bd9", 00:15:42.259 "is_configured": true, 00:15:42.259 "data_offset": 0, 00:15:42.259 "data_size": 65536 00:15:42.259 }, 00:15:42.259 { 00:15:42.259 "name": "BaseBdev2", 00:15:42.259 "uuid": "9691c55d-ed61-466d-bb65-e55e47654a8b", 00:15:42.259 "is_configured": true, 00:15:42.259 "data_offset": 0, 00:15:42.259 "data_size": 65536 00:15:42.259 }, 00:15:42.259 { 00:15:42.259 "name": "BaseBdev3", 00:15:42.259 "uuid": "54f95ba9-83d7-4e68-a9f5-ee8fba66c819", 00:15:42.259 "is_configured": true, 00:15:42.259 "data_offset": 0, 00:15:42.259 "data_size": 65536 00:15:42.259 }, 00:15:42.259 { 00:15:42.259 "name": "BaseBdev4", 00:15:42.259 "uuid": "19059c9d-ad35-47e4-9b2c-202761826cbe", 00:15:42.259 "is_configured": true, 00:15:42.259 "data_offset": 0, 00:15:42.259 "data_size": 65536 00:15:42.259 } 00:15:42.259 ] 00:15:42.259 }' 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.259 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.828 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:42.828 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:42.828 09:53:07 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:42.828 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:42.828 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:42.828 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:42.828 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:42.828 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:42.828 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.828 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.828 [2024-12-06 09:53:07.842213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.828 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.828 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:42.828 "name": "Existed_Raid", 00:15:42.828 "aliases": [ 00:15:42.828 "eb24fadb-a6f4-46fc-823e-df5d02df4e60" 00:15:42.828 ], 00:15:42.828 "product_name": "Raid Volume", 00:15:42.828 "block_size": 512, 00:15:42.828 "num_blocks": 196608, 00:15:42.828 "uuid": "eb24fadb-a6f4-46fc-823e-df5d02df4e60", 00:15:42.828 "assigned_rate_limits": { 00:15:42.828 "rw_ios_per_sec": 0, 00:15:42.828 "rw_mbytes_per_sec": 0, 00:15:42.828 "r_mbytes_per_sec": 0, 00:15:42.828 "w_mbytes_per_sec": 0 00:15:42.828 }, 00:15:42.828 "claimed": false, 00:15:42.828 "zoned": false, 00:15:42.828 "supported_io_types": { 00:15:42.828 "read": true, 00:15:42.828 "write": true, 00:15:42.828 "unmap": false, 00:15:42.828 "flush": false, 00:15:42.828 "reset": true, 00:15:42.828 "nvme_admin": false, 00:15:42.828 "nvme_io": false, 00:15:42.828 "nvme_io_md": 
false, 00:15:42.828 "write_zeroes": true, 00:15:42.828 "zcopy": false, 00:15:42.828 "get_zone_info": false, 00:15:42.828 "zone_management": false, 00:15:42.828 "zone_append": false, 00:15:42.828 "compare": false, 00:15:42.828 "compare_and_write": false, 00:15:42.828 "abort": false, 00:15:42.828 "seek_hole": false, 00:15:42.828 "seek_data": false, 00:15:42.828 "copy": false, 00:15:42.828 "nvme_iov_md": false 00:15:42.828 }, 00:15:42.828 "driver_specific": { 00:15:42.828 "raid": { 00:15:42.828 "uuid": "eb24fadb-a6f4-46fc-823e-df5d02df4e60", 00:15:42.828 "strip_size_kb": 64, 00:15:42.828 "state": "online", 00:15:42.828 "raid_level": "raid5f", 00:15:42.828 "superblock": false, 00:15:42.828 "num_base_bdevs": 4, 00:15:42.828 "num_base_bdevs_discovered": 4, 00:15:42.829 "num_base_bdevs_operational": 4, 00:15:42.829 "base_bdevs_list": [ 00:15:42.829 { 00:15:42.829 "name": "NewBaseBdev", 00:15:42.829 "uuid": "13b544be-3be8-4e9c-b330-ff480f698bd9", 00:15:42.829 "is_configured": true, 00:15:42.829 "data_offset": 0, 00:15:42.829 "data_size": 65536 00:15:42.829 }, 00:15:42.829 { 00:15:42.829 "name": "BaseBdev2", 00:15:42.829 "uuid": "9691c55d-ed61-466d-bb65-e55e47654a8b", 00:15:42.829 "is_configured": true, 00:15:42.829 "data_offset": 0, 00:15:42.829 "data_size": 65536 00:15:42.829 }, 00:15:42.829 { 00:15:42.829 "name": "BaseBdev3", 00:15:42.829 "uuid": "54f95ba9-83d7-4e68-a9f5-ee8fba66c819", 00:15:42.829 "is_configured": true, 00:15:42.829 "data_offset": 0, 00:15:42.829 "data_size": 65536 00:15:42.829 }, 00:15:42.829 { 00:15:42.829 "name": "BaseBdev4", 00:15:42.829 "uuid": "19059c9d-ad35-47e4-9b2c-202761826cbe", 00:15:42.829 "is_configured": true, 00:15:42.829 "data_offset": 0, 00:15:42.829 "data_size": 65536 00:15:42.829 } 00:15:42.829 ] 00:15:42.829 } 00:15:42.829 } 00:15:42.829 }' 00:15:42.829 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:42.829 09:53:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:42.829 BaseBdev2 00:15:42.829 BaseBdev3 00:15:42.829 BaseBdev4' 00:15:42.829 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.829 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:42.829 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.829 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:42.829 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.829 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.829 09:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.829 09:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.829 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.088 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.088 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.088 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.088 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:43.088 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:43.088 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:43.088 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.088 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.088 09:53:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.088 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:43.088 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:43.088 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.088 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.088 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.088 [2024-12-06 09:53:08.161406] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.088 [2024-12-06 09:53:08.161434] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.088 [2024-12-06 09:53:08.161501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.088 [2024-12-06 09:53:08.161783] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.089 [2024-12-06 09:53:08.161793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:43.089 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.089 09:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82634 00:15:43.089 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82634 ']' 00:15:43.089 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82634 00:15:43.089 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:43.089 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:15:43.089 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82634 00:15:43.089 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.089 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.089 killing process with pid 82634 00:15:43.089 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82634' 00:15:43.089 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82634 00:15:43.089 [2024-12-06 09:53:08.208034] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.089 09:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82634 00:15:43.347 [2024-12-06 09:53:08.592851] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.727 ************************************ 00:15:44.727 END TEST raid5f_state_function_test 00:15:44.727 ************************************ 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:44.727 00:15:44.727 real 0m11.439s 00:15:44.727 user 0m18.208s 00:15:44.727 sys 0m2.056s 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.727 09:53:09 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:44.727 09:53:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:44.727 09:53:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.727 09:53:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.727 ************************************ 00:15:44.727 START TEST 
raid5f_state_function_test_sb 00:15:44.727 ************************************ 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:44.727 
09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:44.727 Process raid pid: 83306 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83306 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83306' 00:15:44.727 09:53:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83306 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83306 ']' 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.727 09:53:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.727 [2024-12-06 09:53:09.873800] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:15:44.727 [2024-12-06 09:53:09.874021] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.986 [2024-12-06 09:53:10.036832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.986 [2024-12-06 09:53:10.146469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.245 [2024-12-06 09:53:10.341936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.245 [2024-12-06 09:53:10.342048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.504 [2024-12-06 09:53:10.691055] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:45.504 [2024-12-06 09:53:10.691177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:45.504 [2024-12-06 09:53:10.691192] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:45.504 [2024-12-06 09:53:10.691203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:45.504 [2024-12-06 09:53:10.691209] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:45.504 [2024-12-06 09:53:10.691218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:45.504 [2024-12-06 09:53:10.691223] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:45.504 [2024-12-06 09:53:10.691232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.504 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.504 "name": "Existed_Raid", 00:15:45.504 "uuid": "6f776e57-74e2-4a12-a924-5f29fbf5319d", 00:15:45.504 "strip_size_kb": 64, 00:15:45.504 "state": "configuring", 00:15:45.504 "raid_level": "raid5f", 00:15:45.504 "superblock": true, 00:15:45.504 "num_base_bdevs": 4, 00:15:45.504 "num_base_bdevs_discovered": 0, 00:15:45.504 "num_base_bdevs_operational": 4, 00:15:45.505 "base_bdevs_list": [ 00:15:45.505 { 00:15:45.505 "name": "BaseBdev1", 00:15:45.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.505 "is_configured": false, 00:15:45.505 "data_offset": 0, 00:15:45.505 "data_size": 0 00:15:45.505 }, 00:15:45.505 { 00:15:45.505 "name": "BaseBdev2", 00:15:45.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.505 "is_configured": false, 00:15:45.505 "data_offset": 0, 00:15:45.505 "data_size": 0 00:15:45.505 }, 00:15:45.505 { 00:15:45.505 "name": "BaseBdev3", 00:15:45.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.505 "is_configured": false, 00:15:45.505 "data_offset": 0, 00:15:45.505 "data_size": 0 00:15:45.505 }, 00:15:45.505 { 00:15:45.505 "name": "BaseBdev4", 00:15:45.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.505 "is_configured": false, 00:15:45.505 "data_offset": 0, 00:15:45.505 "data_size": 0 00:15:45.505 } 00:15:45.505 ] 00:15:45.505 }' 00:15:45.505 09:53:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.505 09:53:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.073 [2024-12-06 09:53:11.142250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.073 [2024-12-06 09:53:11.142347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.073 [2024-12-06 09:53:11.154252] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.073 [2024-12-06 09:53:11.154328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.073 [2024-12-06 09:53:11.154355] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.073 [2024-12-06 09:53:11.154377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.073 [2024-12-06 09:53:11.154394] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:46.073 [2024-12-06 09:53:11.154415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:46.073 [2024-12-06 09:53:11.154432] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:46.073 [2024-12-06 09:53:11.154451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.073 [2024-12-06 09:53:11.200827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.073 BaseBdev1 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:46.073 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.074 [ 00:15:46.074 { 00:15:46.074 "name": "BaseBdev1", 00:15:46.074 "aliases": [ 00:15:46.074 "a07ea55d-bba1-423a-831c-32d122440897" 00:15:46.074 ], 00:15:46.074 "product_name": "Malloc disk", 00:15:46.074 "block_size": 512, 00:15:46.074 "num_blocks": 65536, 00:15:46.074 "uuid": "a07ea55d-bba1-423a-831c-32d122440897", 00:15:46.074 "assigned_rate_limits": { 00:15:46.074 "rw_ios_per_sec": 0, 00:15:46.074 "rw_mbytes_per_sec": 0, 00:15:46.074 "r_mbytes_per_sec": 0, 00:15:46.074 "w_mbytes_per_sec": 0 00:15:46.074 }, 00:15:46.074 "claimed": true, 00:15:46.074 "claim_type": "exclusive_write", 00:15:46.074 "zoned": false, 00:15:46.074 "supported_io_types": { 00:15:46.074 "read": true, 00:15:46.074 "write": true, 00:15:46.074 "unmap": true, 00:15:46.074 "flush": true, 00:15:46.074 "reset": true, 00:15:46.074 "nvme_admin": false, 00:15:46.074 "nvme_io": false, 00:15:46.074 "nvme_io_md": false, 00:15:46.074 "write_zeroes": true, 00:15:46.074 "zcopy": true, 00:15:46.074 "get_zone_info": false, 00:15:46.074 "zone_management": false, 00:15:46.074 "zone_append": false, 00:15:46.074 "compare": false, 00:15:46.074 "compare_and_write": false, 00:15:46.074 "abort": true, 00:15:46.074 "seek_hole": false, 00:15:46.074 "seek_data": false, 00:15:46.074 "copy": true, 00:15:46.074 "nvme_iov_md": false 00:15:46.074 }, 00:15:46.074 "memory_domains": [ 00:15:46.074 { 00:15:46.074 "dma_device_id": "system", 00:15:46.074 "dma_device_type": 1 00:15:46.074 }, 00:15:46.074 { 00:15:46.074 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:46.074 "dma_device_type": 2 00:15:46.074 } 00:15:46.074 ], 00:15:46.074 "driver_specific": {} 00:15:46.074 } 00:15:46.074 ] 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.074 09:53:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.074 "name": "Existed_Raid", 00:15:46.074 "uuid": "4f1197af-9e3a-40ae-93d9-191638fab459", 00:15:46.074 "strip_size_kb": 64, 00:15:46.074 "state": "configuring", 00:15:46.074 "raid_level": "raid5f", 00:15:46.074 "superblock": true, 00:15:46.074 "num_base_bdevs": 4, 00:15:46.074 "num_base_bdevs_discovered": 1, 00:15:46.074 "num_base_bdevs_operational": 4, 00:15:46.074 "base_bdevs_list": [ 00:15:46.074 { 00:15:46.074 "name": "BaseBdev1", 00:15:46.074 "uuid": "a07ea55d-bba1-423a-831c-32d122440897", 00:15:46.074 "is_configured": true, 00:15:46.074 "data_offset": 2048, 00:15:46.074 "data_size": 63488 00:15:46.074 }, 00:15:46.074 { 00:15:46.074 "name": "BaseBdev2", 00:15:46.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.074 "is_configured": false, 00:15:46.074 "data_offset": 0, 00:15:46.074 "data_size": 0 00:15:46.074 }, 00:15:46.074 { 00:15:46.074 "name": "BaseBdev3", 00:15:46.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.074 "is_configured": false, 00:15:46.074 "data_offset": 0, 00:15:46.074 "data_size": 0 00:15:46.074 }, 00:15:46.074 { 00:15:46.074 "name": "BaseBdev4", 00:15:46.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.074 "is_configured": false, 00:15:46.074 "data_offset": 0, 00:15:46.074 "data_size": 0 00:15:46.074 } 00:15:46.074 ] 00:15:46.074 }' 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.074 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:46.643 09:53:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.643 [2024-12-06 09:53:11.680038] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.643 [2024-12-06 09:53:11.680091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.643 [2024-12-06 09:53:11.692068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.643 [2024-12-06 09:53:11.693895] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.643 [2024-12-06 09:53:11.693991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.643 [2024-12-06 09:53:11.694006] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:46.643 [2024-12-06 09:53:11.694018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:46.643 [2024-12-06 09:53:11.694025] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:46.643 [2024-12-06 09:53:11.694033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.643 09:53:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.643 "name": "Existed_Raid", 00:15:46.643 "uuid": "1a90194a-9f96-48dd-9af8-2fa817edf469", 00:15:46.643 "strip_size_kb": 64, 00:15:46.643 "state": "configuring", 00:15:46.643 "raid_level": "raid5f", 00:15:46.643 "superblock": true, 00:15:46.643 "num_base_bdevs": 4, 00:15:46.643 "num_base_bdevs_discovered": 1, 00:15:46.643 "num_base_bdevs_operational": 4, 00:15:46.643 "base_bdevs_list": [ 00:15:46.643 { 00:15:46.643 "name": "BaseBdev1", 00:15:46.643 "uuid": "a07ea55d-bba1-423a-831c-32d122440897", 00:15:46.643 "is_configured": true, 00:15:46.643 "data_offset": 2048, 00:15:46.643 "data_size": 63488 00:15:46.643 }, 00:15:46.643 { 00:15:46.643 "name": "BaseBdev2", 00:15:46.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.643 "is_configured": false, 00:15:46.643 "data_offset": 0, 00:15:46.643 "data_size": 0 00:15:46.643 }, 00:15:46.643 { 00:15:46.643 "name": "BaseBdev3", 00:15:46.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.643 "is_configured": false, 00:15:46.643 "data_offset": 0, 00:15:46.643 "data_size": 0 00:15:46.643 }, 00:15:46.643 { 00:15:46.643 "name": "BaseBdev4", 00:15:46.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.643 "is_configured": false, 00:15:46.643 "data_offset": 0, 00:15:46.643 "data_size": 0 00:15:46.643 } 00:15:46.643 ] 00:15:46.643 }' 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.643 09:53:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.902 [2024-12-06 09:53:12.085518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.902 BaseBdev2 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.902 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.902 [ 00:15:46.902 { 00:15:46.902 "name": "BaseBdev2", 00:15:46.902 "aliases": [ 00:15:46.902 
"81eefe9d-8690-4860-9f03-51e4612af153" 00:15:46.902 ], 00:15:46.902 "product_name": "Malloc disk", 00:15:46.902 "block_size": 512, 00:15:46.902 "num_blocks": 65536, 00:15:46.902 "uuid": "81eefe9d-8690-4860-9f03-51e4612af153", 00:15:46.902 "assigned_rate_limits": { 00:15:46.902 "rw_ios_per_sec": 0, 00:15:46.902 "rw_mbytes_per_sec": 0, 00:15:46.902 "r_mbytes_per_sec": 0, 00:15:46.903 "w_mbytes_per_sec": 0 00:15:46.903 }, 00:15:46.903 "claimed": true, 00:15:46.903 "claim_type": "exclusive_write", 00:15:46.903 "zoned": false, 00:15:46.903 "supported_io_types": { 00:15:46.903 "read": true, 00:15:46.903 "write": true, 00:15:46.903 "unmap": true, 00:15:46.903 "flush": true, 00:15:46.903 "reset": true, 00:15:46.903 "nvme_admin": false, 00:15:46.903 "nvme_io": false, 00:15:46.903 "nvme_io_md": false, 00:15:46.903 "write_zeroes": true, 00:15:46.903 "zcopy": true, 00:15:46.903 "get_zone_info": false, 00:15:46.903 "zone_management": false, 00:15:46.903 "zone_append": false, 00:15:46.903 "compare": false, 00:15:46.903 "compare_and_write": false, 00:15:46.903 "abort": true, 00:15:46.903 "seek_hole": false, 00:15:46.903 "seek_data": false, 00:15:46.903 "copy": true, 00:15:46.903 "nvme_iov_md": false 00:15:46.903 }, 00:15:46.903 "memory_domains": [ 00:15:46.903 { 00:15:46.903 "dma_device_id": "system", 00:15:46.903 "dma_device_type": 1 00:15:46.903 }, 00:15:46.903 { 00:15:46.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.903 "dma_device_type": 2 00:15:46.903 } 00:15:46.903 ], 00:15:46.903 "driver_specific": {} 00:15:46.903 } 00:15:46.903 ] 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.903 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.162 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.162 "name": "Existed_Raid", 00:15:47.162 "uuid": 
"1a90194a-9f96-48dd-9af8-2fa817edf469", 00:15:47.162 "strip_size_kb": 64, 00:15:47.162 "state": "configuring", 00:15:47.162 "raid_level": "raid5f", 00:15:47.162 "superblock": true, 00:15:47.162 "num_base_bdevs": 4, 00:15:47.162 "num_base_bdevs_discovered": 2, 00:15:47.162 "num_base_bdevs_operational": 4, 00:15:47.162 "base_bdevs_list": [ 00:15:47.162 { 00:15:47.162 "name": "BaseBdev1", 00:15:47.162 "uuid": "a07ea55d-bba1-423a-831c-32d122440897", 00:15:47.162 "is_configured": true, 00:15:47.162 "data_offset": 2048, 00:15:47.162 "data_size": 63488 00:15:47.162 }, 00:15:47.162 { 00:15:47.163 "name": "BaseBdev2", 00:15:47.163 "uuid": "81eefe9d-8690-4860-9f03-51e4612af153", 00:15:47.163 "is_configured": true, 00:15:47.163 "data_offset": 2048, 00:15:47.163 "data_size": 63488 00:15:47.163 }, 00:15:47.163 { 00:15:47.163 "name": "BaseBdev3", 00:15:47.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.163 "is_configured": false, 00:15:47.163 "data_offset": 0, 00:15:47.163 "data_size": 0 00:15:47.163 }, 00:15:47.163 { 00:15:47.163 "name": "BaseBdev4", 00:15:47.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.163 "is_configured": false, 00:15:47.163 "data_offset": 0, 00:15:47.163 "data_size": 0 00:15:47.163 } 00:15:47.163 ] 00:15:47.163 }' 00:15:47.163 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.163 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 [2024-12-06 09:53:12.654615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:47.422 BaseBdev3 
00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 [ 00:15:47.422 { 00:15:47.422 "name": "BaseBdev3", 00:15:47.422 "aliases": [ 00:15:47.422 "bd4ba012-87de-4fd5-bafa-6d6e620b075e" 00:15:47.422 ], 00:15:47.422 "product_name": "Malloc disk", 00:15:47.422 "block_size": 512, 00:15:47.422 "num_blocks": 65536, 00:15:47.422 "uuid": "bd4ba012-87de-4fd5-bafa-6d6e620b075e", 00:15:47.422 
"assigned_rate_limits": { 00:15:47.422 "rw_ios_per_sec": 0, 00:15:47.422 "rw_mbytes_per_sec": 0, 00:15:47.422 "r_mbytes_per_sec": 0, 00:15:47.422 "w_mbytes_per_sec": 0 00:15:47.422 }, 00:15:47.422 "claimed": true, 00:15:47.422 "claim_type": "exclusive_write", 00:15:47.422 "zoned": false, 00:15:47.422 "supported_io_types": { 00:15:47.422 "read": true, 00:15:47.422 "write": true, 00:15:47.422 "unmap": true, 00:15:47.422 "flush": true, 00:15:47.422 "reset": true, 00:15:47.422 "nvme_admin": false, 00:15:47.422 "nvme_io": false, 00:15:47.422 "nvme_io_md": false, 00:15:47.422 "write_zeroes": true, 00:15:47.422 "zcopy": true, 00:15:47.422 "get_zone_info": false, 00:15:47.422 "zone_management": false, 00:15:47.422 "zone_append": false, 00:15:47.422 "compare": false, 00:15:47.422 "compare_and_write": false, 00:15:47.422 "abort": true, 00:15:47.422 "seek_hole": false, 00:15:47.422 "seek_data": false, 00:15:47.422 "copy": true, 00:15:47.422 "nvme_iov_md": false 00:15:47.422 }, 00:15:47.422 "memory_domains": [ 00:15:47.422 { 00:15:47.422 "dma_device_id": "system", 00:15:47.422 "dma_device_type": 1 00:15:47.422 }, 00:15:47.422 { 00:15:47.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.422 "dma_device_type": 2 00:15:47.422 } 00:15:47.422 ], 00:15:47.422 "driver_specific": {} 00:15:47.422 } 00:15:47.422 ] 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.422 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.682 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.682 "name": "Existed_Raid", 00:15:47.682 "uuid": "1a90194a-9f96-48dd-9af8-2fa817edf469", 00:15:47.682 "strip_size_kb": 64, 00:15:47.682 "state": "configuring", 00:15:47.682 "raid_level": "raid5f", 00:15:47.682 "superblock": true, 00:15:47.682 "num_base_bdevs": 4, 00:15:47.682 "num_base_bdevs_discovered": 3, 
00:15:47.682 "num_base_bdevs_operational": 4, 00:15:47.682 "base_bdevs_list": [ 00:15:47.682 { 00:15:47.682 "name": "BaseBdev1", 00:15:47.682 "uuid": "a07ea55d-bba1-423a-831c-32d122440897", 00:15:47.683 "is_configured": true, 00:15:47.683 "data_offset": 2048, 00:15:47.683 "data_size": 63488 00:15:47.683 }, 00:15:47.683 { 00:15:47.683 "name": "BaseBdev2", 00:15:47.683 "uuid": "81eefe9d-8690-4860-9f03-51e4612af153", 00:15:47.683 "is_configured": true, 00:15:47.683 "data_offset": 2048, 00:15:47.683 "data_size": 63488 00:15:47.683 }, 00:15:47.683 { 00:15:47.683 "name": "BaseBdev3", 00:15:47.683 "uuid": "bd4ba012-87de-4fd5-bafa-6d6e620b075e", 00:15:47.683 "is_configured": true, 00:15:47.683 "data_offset": 2048, 00:15:47.683 "data_size": 63488 00:15:47.683 }, 00:15:47.683 { 00:15:47.683 "name": "BaseBdev4", 00:15:47.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.683 "is_configured": false, 00:15:47.683 "data_offset": 0, 00:15:47.683 "data_size": 0 00:15:47.683 } 00:15:47.683 ] 00:15:47.683 }' 00:15:47.683 09:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.683 09:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.942 [2024-12-06 09:53:13.146218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:47.942 [2024-12-06 09:53:13.146594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:47.942 [2024-12-06 09:53:13.146648] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:47.942 [2024-12-06 
09:53:13.146932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:47.942 BaseBdev4 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.942 [2024-12-06 09:53:13.153708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:47.942 [2024-12-06 09:53:13.153768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:47.942 [2024-12-06 09:53:13.153985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:47.942 09:53:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.942 [ 00:15:47.942 { 00:15:47.942 "name": "BaseBdev4", 00:15:47.942 "aliases": [ 00:15:47.942 "89d13f65-12ca-4238-9e83-945de078612f" 00:15:47.942 ], 00:15:47.942 "product_name": "Malloc disk", 00:15:47.942 "block_size": 512, 00:15:47.942 "num_blocks": 65536, 00:15:47.942 "uuid": "89d13f65-12ca-4238-9e83-945de078612f", 00:15:47.942 "assigned_rate_limits": { 00:15:47.942 "rw_ios_per_sec": 0, 00:15:47.942 "rw_mbytes_per_sec": 0, 00:15:47.942 "r_mbytes_per_sec": 0, 00:15:47.942 "w_mbytes_per_sec": 0 00:15:47.942 }, 00:15:47.942 "claimed": true, 00:15:47.942 "claim_type": "exclusive_write", 00:15:47.942 "zoned": false, 00:15:47.942 "supported_io_types": { 00:15:47.942 "read": true, 00:15:47.942 "write": true, 00:15:47.942 "unmap": true, 00:15:47.942 "flush": true, 00:15:47.942 "reset": true, 00:15:47.942 "nvme_admin": false, 00:15:47.942 "nvme_io": false, 00:15:47.942 "nvme_io_md": false, 00:15:47.942 "write_zeroes": true, 00:15:47.942 "zcopy": true, 00:15:47.942 "get_zone_info": false, 00:15:47.942 "zone_management": false, 00:15:47.942 "zone_append": false, 00:15:47.942 "compare": false, 00:15:47.942 "compare_and_write": false, 00:15:47.942 "abort": true, 00:15:47.942 "seek_hole": false, 00:15:47.942 "seek_data": false, 00:15:47.942 "copy": true, 00:15:47.942 "nvme_iov_md": false 00:15:47.942 }, 00:15:47.942 "memory_domains": [ 00:15:47.942 { 00:15:47.942 "dma_device_id": "system", 00:15:47.942 "dma_device_type": 1 00:15:47.942 }, 00:15:47.942 { 00:15:47.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.942 "dma_device_type": 2 00:15:47.942 } 00:15:47.942 ], 00:15:47.942 "driver_specific": {} 00:15:47.942 } 00:15:47.942 ] 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.942 09:53:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.942 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.943 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.943 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.943 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.943 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.943 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.943 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.943 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.943 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.943 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.943 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:48.202 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.202 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.202 "name": "Existed_Raid", 00:15:48.202 "uuid": "1a90194a-9f96-48dd-9af8-2fa817edf469", 00:15:48.202 "strip_size_kb": 64, 00:15:48.202 "state": "online", 00:15:48.202 "raid_level": "raid5f", 00:15:48.202 "superblock": true, 00:15:48.202 "num_base_bdevs": 4, 00:15:48.202 "num_base_bdevs_discovered": 4, 00:15:48.202 "num_base_bdevs_operational": 4, 00:15:48.202 "base_bdevs_list": [ 00:15:48.202 { 00:15:48.202 "name": "BaseBdev1", 00:15:48.202 "uuid": "a07ea55d-bba1-423a-831c-32d122440897", 00:15:48.202 "is_configured": true, 00:15:48.202 "data_offset": 2048, 00:15:48.202 "data_size": 63488 00:15:48.202 }, 00:15:48.202 { 00:15:48.202 "name": "BaseBdev2", 00:15:48.202 "uuid": "81eefe9d-8690-4860-9f03-51e4612af153", 00:15:48.202 "is_configured": true, 00:15:48.202 "data_offset": 2048, 00:15:48.202 "data_size": 63488 00:15:48.202 }, 00:15:48.202 { 00:15:48.202 "name": "BaseBdev3", 00:15:48.202 "uuid": "bd4ba012-87de-4fd5-bafa-6d6e620b075e", 00:15:48.202 "is_configured": true, 00:15:48.202 "data_offset": 2048, 00:15:48.202 "data_size": 63488 00:15:48.202 }, 00:15:48.202 { 00:15:48.202 "name": "BaseBdev4", 00:15:48.202 "uuid": "89d13f65-12ca-4238-9e83-945de078612f", 00:15:48.202 "is_configured": true, 00:15:48.202 "data_offset": 2048, 00:15:48.202 "data_size": 63488 00:15:48.202 } 00:15:48.202 ] 00:15:48.202 }' 00:15:48.202 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.202 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.462 [2024-12-06 09:53:13.633574] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:48.462 "name": "Existed_Raid", 00:15:48.462 "aliases": [ 00:15:48.462 "1a90194a-9f96-48dd-9af8-2fa817edf469" 00:15:48.462 ], 00:15:48.462 "product_name": "Raid Volume", 00:15:48.462 "block_size": 512, 00:15:48.462 "num_blocks": 190464, 00:15:48.462 "uuid": "1a90194a-9f96-48dd-9af8-2fa817edf469", 00:15:48.462 "assigned_rate_limits": { 00:15:48.462 "rw_ios_per_sec": 0, 00:15:48.462 "rw_mbytes_per_sec": 0, 00:15:48.462 "r_mbytes_per_sec": 0, 00:15:48.462 "w_mbytes_per_sec": 0 00:15:48.462 }, 00:15:48.462 "claimed": false, 00:15:48.462 "zoned": false, 00:15:48.462 "supported_io_types": { 00:15:48.462 "read": true, 00:15:48.462 "write": true, 00:15:48.462 "unmap": false, 00:15:48.462 "flush": false, 
00:15:48.462 "reset": true, 00:15:48.462 "nvme_admin": false, 00:15:48.462 "nvme_io": false, 00:15:48.462 "nvme_io_md": false, 00:15:48.462 "write_zeroes": true, 00:15:48.462 "zcopy": false, 00:15:48.462 "get_zone_info": false, 00:15:48.462 "zone_management": false, 00:15:48.462 "zone_append": false, 00:15:48.462 "compare": false, 00:15:48.462 "compare_and_write": false, 00:15:48.462 "abort": false, 00:15:48.462 "seek_hole": false, 00:15:48.462 "seek_data": false, 00:15:48.462 "copy": false, 00:15:48.462 "nvme_iov_md": false 00:15:48.462 }, 00:15:48.462 "driver_specific": { 00:15:48.462 "raid": { 00:15:48.462 "uuid": "1a90194a-9f96-48dd-9af8-2fa817edf469", 00:15:48.462 "strip_size_kb": 64, 00:15:48.462 "state": "online", 00:15:48.462 "raid_level": "raid5f", 00:15:48.462 "superblock": true, 00:15:48.462 "num_base_bdevs": 4, 00:15:48.462 "num_base_bdevs_discovered": 4, 00:15:48.462 "num_base_bdevs_operational": 4, 00:15:48.462 "base_bdevs_list": [ 00:15:48.462 { 00:15:48.462 "name": "BaseBdev1", 00:15:48.462 "uuid": "a07ea55d-bba1-423a-831c-32d122440897", 00:15:48.462 "is_configured": true, 00:15:48.462 "data_offset": 2048, 00:15:48.462 "data_size": 63488 00:15:48.462 }, 00:15:48.462 { 00:15:48.462 "name": "BaseBdev2", 00:15:48.462 "uuid": "81eefe9d-8690-4860-9f03-51e4612af153", 00:15:48.462 "is_configured": true, 00:15:48.462 "data_offset": 2048, 00:15:48.462 "data_size": 63488 00:15:48.462 }, 00:15:48.462 { 00:15:48.462 "name": "BaseBdev3", 00:15:48.462 "uuid": "bd4ba012-87de-4fd5-bafa-6d6e620b075e", 00:15:48.462 "is_configured": true, 00:15:48.462 "data_offset": 2048, 00:15:48.462 "data_size": 63488 00:15:48.462 }, 00:15:48.462 { 00:15:48.462 "name": "BaseBdev4", 00:15:48.462 "uuid": "89d13f65-12ca-4238-9e83-945de078612f", 00:15:48.462 "is_configured": true, 00:15:48.462 "data_offset": 2048, 00:15:48.462 "data_size": 63488 00:15:48.462 } 00:15:48.462 ] 00:15:48.462 } 00:15:48.462 } 00:15:48.462 }' 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:48.462 BaseBdev2 00:15:48.462 BaseBdev3 00:15:48.462 BaseBdev4' 00:15:48.462 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:48.723 09:53:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.723 09:53:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.723 09:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.723 [2024-12-06 09:53:13.952850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.983 "name": "Existed_Raid", 00:15:48.983 "uuid": "1a90194a-9f96-48dd-9af8-2fa817edf469", 00:15:48.983 "strip_size_kb": 64, 00:15:48.983 "state": "online", 00:15:48.983 "raid_level": "raid5f", 00:15:48.983 "superblock": true, 00:15:48.983 "num_base_bdevs": 4, 00:15:48.983 "num_base_bdevs_discovered": 3, 00:15:48.983 "num_base_bdevs_operational": 3, 00:15:48.983 "base_bdevs_list": [ 00:15:48.983 { 00:15:48.983 "name": 
null, 00:15:48.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.983 "is_configured": false, 00:15:48.983 "data_offset": 0, 00:15:48.983 "data_size": 63488 00:15:48.983 }, 00:15:48.983 { 00:15:48.983 "name": "BaseBdev2", 00:15:48.983 "uuid": "81eefe9d-8690-4860-9f03-51e4612af153", 00:15:48.983 "is_configured": true, 00:15:48.983 "data_offset": 2048, 00:15:48.983 "data_size": 63488 00:15:48.983 }, 00:15:48.983 { 00:15:48.983 "name": "BaseBdev3", 00:15:48.983 "uuid": "bd4ba012-87de-4fd5-bafa-6d6e620b075e", 00:15:48.983 "is_configured": true, 00:15:48.983 "data_offset": 2048, 00:15:48.983 "data_size": 63488 00:15:48.983 }, 00:15:48.983 { 00:15:48.983 "name": "BaseBdev4", 00:15:48.983 "uuid": "89d13f65-12ca-4238-9e83-945de078612f", 00:15:48.983 "is_configured": true, 00:15:48.983 "data_offset": 2048, 00:15:48.983 "data_size": 63488 00:15:48.983 } 00:15:48.983 ] 00:15:48.983 }' 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.983 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.243 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:49.243 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.243 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.243 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:49.243 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.243 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.243 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.243 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:15:49.243 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:49.243 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:49.243 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.243 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.243 [2024-12-06 09:53:14.458860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:49.243 [2024-12-06 09:53:14.459021] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.503 [2024-12-06 09:53:14.552741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.503 [2024-12-06 09:53:14.608642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.503 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.503 [2024-12-06 
09:53:14.761312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:49.503 [2024-12-06 09:53:14.761361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.763 09:53:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.763 BaseBdev2 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.763 [ 00:15:49.763 { 00:15:49.763 "name": "BaseBdev2", 00:15:49.763 "aliases": [ 00:15:49.763 "6c396001-0d79-4d14-9cd6-c35a3a47eb3d" 00:15:49.763 ], 00:15:49.763 "product_name": "Malloc disk", 00:15:49.763 "block_size": 512, 00:15:49.763 
"num_blocks": 65536, 00:15:49.763 "uuid": "6c396001-0d79-4d14-9cd6-c35a3a47eb3d", 00:15:49.763 "assigned_rate_limits": { 00:15:49.763 "rw_ios_per_sec": 0, 00:15:49.763 "rw_mbytes_per_sec": 0, 00:15:49.763 "r_mbytes_per_sec": 0, 00:15:49.763 "w_mbytes_per_sec": 0 00:15:49.763 }, 00:15:49.763 "claimed": false, 00:15:49.763 "zoned": false, 00:15:49.763 "supported_io_types": { 00:15:49.763 "read": true, 00:15:49.763 "write": true, 00:15:49.763 "unmap": true, 00:15:49.763 "flush": true, 00:15:49.763 "reset": true, 00:15:49.763 "nvme_admin": false, 00:15:49.763 "nvme_io": false, 00:15:49.763 "nvme_io_md": false, 00:15:49.763 "write_zeroes": true, 00:15:49.763 "zcopy": true, 00:15:49.763 "get_zone_info": false, 00:15:49.763 "zone_management": false, 00:15:49.763 "zone_append": false, 00:15:49.763 "compare": false, 00:15:49.763 "compare_and_write": false, 00:15:49.763 "abort": true, 00:15:49.763 "seek_hole": false, 00:15:49.763 "seek_data": false, 00:15:49.763 "copy": true, 00:15:49.763 "nvme_iov_md": false 00:15:49.763 }, 00:15:49.763 "memory_domains": [ 00:15:49.763 { 00:15:49.763 "dma_device_id": "system", 00:15:49.763 "dma_device_type": 1 00:15:49.763 }, 00:15:49.763 { 00:15:49.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.763 "dma_device_type": 2 00:15:49.763 } 00:15:49.763 ], 00:15:49.763 "driver_specific": {} 00:15:49.763 } 00:15:49.763 ] 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:49.763 09:53:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.763 09:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.763 BaseBdev3 00:15:49.763 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.763 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:49.764 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.023 [ 00:15:50.023 { 00:15:50.023 "name": "BaseBdev3", 00:15:50.023 "aliases": [ 00:15:50.023 
"836c1ddd-df4b-47f6-9fd4-d6682dd99f2d" 00:15:50.023 ], 00:15:50.023 "product_name": "Malloc disk", 00:15:50.023 "block_size": 512, 00:15:50.023 "num_blocks": 65536, 00:15:50.023 "uuid": "836c1ddd-df4b-47f6-9fd4-d6682dd99f2d", 00:15:50.023 "assigned_rate_limits": { 00:15:50.023 "rw_ios_per_sec": 0, 00:15:50.023 "rw_mbytes_per_sec": 0, 00:15:50.023 "r_mbytes_per_sec": 0, 00:15:50.023 "w_mbytes_per_sec": 0 00:15:50.023 }, 00:15:50.023 "claimed": false, 00:15:50.023 "zoned": false, 00:15:50.023 "supported_io_types": { 00:15:50.023 "read": true, 00:15:50.023 "write": true, 00:15:50.023 "unmap": true, 00:15:50.023 "flush": true, 00:15:50.023 "reset": true, 00:15:50.023 "nvme_admin": false, 00:15:50.023 "nvme_io": false, 00:15:50.023 "nvme_io_md": false, 00:15:50.023 "write_zeroes": true, 00:15:50.023 "zcopy": true, 00:15:50.023 "get_zone_info": false, 00:15:50.023 "zone_management": false, 00:15:50.023 "zone_append": false, 00:15:50.023 "compare": false, 00:15:50.023 "compare_and_write": false, 00:15:50.023 "abort": true, 00:15:50.023 "seek_hole": false, 00:15:50.023 "seek_data": false, 00:15:50.023 "copy": true, 00:15:50.023 "nvme_iov_md": false 00:15:50.023 }, 00:15:50.023 "memory_domains": [ 00:15:50.023 { 00:15:50.023 "dma_device_id": "system", 00:15:50.023 "dma_device_type": 1 00:15:50.023 }, 00:15:50.023 { 00:15:50.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.023 "dma_device_type": 2 00:15:50.023 } 00:15:50.023 ], 00:15:50.023 "driver_specific": {} 00:15:50.023 } 00:15:50.023 ] 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:50.023 09:53:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.023 BaseBdev4 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:50.023 [ 00:15:50.023 { 00:15:50.023 "name": "BaseBdev4", 00:15:50.023 "aliases": [ 00:15:50.023 "ec5eb1ce-878a-45aa-a6ba-1e2b9464569c" 00:15:50.023 ], 00:15:50.023 "product_name": "Malloc disk", 00:15:50.023 "block_size": 512, 00:15:50.023 "num_blocks": 65536, 00:15:50.023 "uuid": "ec5eb1ce-878a-45aa-a6ba-1e2b9464569c", 00:15:50.023 "assigned_rate_limits": { 00:15:50.023 "rw_ios_per_sec": 0, 00:15:50.023 "rw_mbytes_per_sec": 0, 00:15:50.023 "r_mbytes_per_sec": 0, 00:15:50.023 "w_mbytes_per_sec": 0 00:15:50.023 }, 00:15:50.023 "claimed": false, 00:15:50.023 "zoned": false, 00:15:50.023 "supported_io_types": { 00:15:50.023 "read": true, 00:15:50.023 "write": true, 00:15:50.023 "unmap": true, 00:15:50.023 "flush": true, 00:15:50.023 "reset": true, 00:15:50.023 "nvme_admin": false, 00:15:50.023 "nvme_io": false, 00:15:50.023 "nvme_io_md": false, 00:15:50.023 "write_zeroes": true, 00:15:50.023 "zcopy": true, 00:15:50.023 "get_zone_info": false, 00:15:50.023 "zone_management": false, 00:15:50.023 "zone_append": false, 00:15:50.023 "compare": false, 00:15:50.023 "compare_and_write": false, 00:15:50.023 "abort": true, 00:15:50.023 "seek_hole": false, 00:15:50.023 "seek_data": false, 00:15:50.023 "copy": true, 00:15:50.023 "nvme_iov_md": false 00:15:50.023 }, 00:15:50.023 "memory_domains": [ 00:15:50.023 { 00:15:50.023 "dma_device_id": "system", 00:15:50.023 "dma_device_type": 1 00:15:50.023 }, 00:15:50.023 { 00:15:50.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.023 "dma_device_type": 2 00:15:50.023 } 00:15:50.023 ], 00:15:50.023 "driver_specific": {} 00:15:50.023 } 00:15:50.023 ] 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:50.023 09:53:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.023 [2024-12-06 09:53:15.157412] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.023 [2024-12-06 09:53:15.157495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.023 [2024-12-06 09:53:15.157536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.023 [2024-12-06 09:53:15.159255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:50.023 [2024-12-06 09:53:15.159357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.023 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.023 "name": "Existed_Raid", 00:15:50.023 "uuid": "0ef741ba-d7a2-4337-bd57-66794b997e35", 00:15:50.023 "strip_size_kb": 64, 00:15:50.023 "state": "configuring", 00:15:50.023 "raid_level": "raid5f", 00:15:50.023 "superblock": true, 00:15:50.023 "num_base_bdevs": 4, 00:15:50.023 "num_base_bdevs_discovered": 3, 00:15:50.023 "num_base_bdevs_operational": 4, 00:15:50.023 "base_bdevs_list": [ 00:15:50.023 { 00:15:50.023 "name": "BaseBdev1", 00:15:50.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.023 "is_configured": false, 00:15:50.023 "data_offset": 0, 00:15:50.023 "data_size": 0 00:15:50.023 }, 00:15:50.023 { 00:15:50.023 "name": "BaseBdev2", 00:15:50.023 "uuid": "6c396001-0d79-4d14-9cd6-c35a3a47eb3d", 00:15:50.023 "is_configured": true, 00:15:50.023 "data_offset": 2048, 00:15:50.023 
"data_size": 63488 00:15:50.023 }, 00:15:50.023 { 00:15:50.023 "name": "BaseBdev3", 00:15:50.023 "uuid": "836c1ddd-df4b-47f6-9fd4-d6682dd99f2d", 00:15:50.023 "is_configured": true, 00:15:50.023 "data_offset": 2048, 00:15:50.023 "data_size": 63488 00:15:50.024 }, 00:15:50.024 { 00:15:50.024 "name": "BaseBdev4", 00:15:50.024 "uuid": "ec5eb1ce-878a-45aa-a6ba-1e2b9464569c", 00:15:50.024 "is_configured": true, 00:15:50.024 "data_offset": 2048, 00:15:50.024 "data_size": 63488 00:15:50.024 } 00:15:50.024 ] 00:15:50.024 }' 00:15:50.024 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.024 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.605 [2024-12-06 09:53:15.592693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.605 09:53:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.605 "name": "Existed_Raid", 00:15:50.605 "uuid": "0ef741ba-d7a2-4337-bd57-66794b997e35", 00:15:50.605 "strip_size_kb": 64, 00:15:50.605 "state": "configuring", 00:15:50.605 "raid_level": "raid5f", 00:15:50.605 "superblock": true, 00:15:50.605 "num_base_bdevs": 4, 00:15:50.605 "num_base_bdevs_discovered": 2, 00:15:50.605 "num_base_bdevs_operational": 4, 00:15:50.605 "base_bdevs_list": [ 00:15:50.605 { 00:15:50.605 "name": "BaseBdev1", 00:15:50.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.605 "is_configured": false, 00:15:50.605 "data_offset": 0, 00:15:50.605 "data_size": 0 00:15:50.605 }, 00:15:50.605 { 00:15:50.605 "name": null, 00:15:50.605 "uuid": "6c396001-0d79-4d14-9cd6-c35a3a47eb3d", 00:15:50.605 
"is_configured": false, 00:15:50.605 "data_offset": 0, 00:15:50.605 "data_size": 63488 00:15:50.605 }, 00:15:50.605 { 00:15:50.605 "name": "BaseBdev3", 00:15:50.605 "uuid": "836c1ddd-df4b-47f6-9fd4-d6682dd99f2d", 00:15:50.605 "is_configured": true, 00:15:50.605 "data_offset": 2048, 00:15:50.605 "data_size": 63488 00:15:50.605 }, 00:15:50.605 { 00:15:50.605 "name": "BaseBdev4", 00:15:50.605 "uuid": "ec5eb1ce-878a-45aa-a6ba-1e2b9464569c", 00:15:50.605 "is_configured": true, 00:15:50.605 "data_offset": 2048, 00:15:50.605 "data_size": 63488 00:15:50.605 } 00:15:50.605 ] 00:15:50.605 }' 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.605 09:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.893 [2024-12-06 09:53:16.112732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:15:50.893 BaseBdev1 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.893 [ 00:15:50.893 { 00:15:50.893 "name": "BaseBdev1", 00:15:50.893 "aliases": [ 00:15:50.893 "c1999781-b17c-44bd-895e-3af2b67536a4" 00:15:50.893 ], 00:15:50.893 "product_name": "Malloc disk", 00:15:50.893 "block_size": 512, 00:15:50.893 "num_blocks": 65536, 00:15:50.893 "uuid": "c1999781-b17c-44bd-895e-3af2b67536a4", 
00:15:50.893 "assigned_rate_limits": { 00:15:50.893 "rw_ios_per_sec": 0, 00:15:50.893 "rw_mbytes_per_sec": 0, 00:15:50.893 "r_mbytes_per_sec": 0, 00:15:50.893 "w_mbytes_per_sec": 0 00:15:50.893 }, 00:15:50.893 "claimed": true, 00:15:50.893 "claim_type": "exclusive_write", 00:15:50.893 "zoned": false, 00:15:50.893 "supported_io_types": { 00:15:50.893 "read": true, 00:15:50.893 "write": true, 00:15:50.893 "unmap": true, 00:15:50.893 "flush": true, 00:15:50.893 "reset": true, 00:15:50.893 "nvme_admin": false, 00:15:50.893 "nvme_io": false, 00:15:50.893 "nvme_io_md": false, 00:15:50.893 "write_zeroes": true, 00:15:50.893 "zcopy": true, 00:15:50.893 "get_zone_info": false, 00:15:50.893 "zone_management": false, 00:15:50.893 "zone_append": false, 00:15:50.893 "compare": false, 00:15:50.893 "compare_and_write": false, 00:15:50.893 "abort": true, 00:15:50.893 "seek_hole": false, 00:15:50.893 "seek_data": false, 00:15:50.893 "copy": true, 00:15:50.893 "nvme_iov_md": false 00:15:50.893 }, 00:15:50.893 "memory_domains": [ 00:15:50.893 { 00:15:50.893 "dma_device_id": "system", 00:15:50.893 "dma_device_type": 1 00:15:50.893 }, 00:15:50.893 { 00:15:50.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.893 "dma_device_type": 2 00:15:50.893 } 00:15:50.893 ], 00:15:50.893 "driver_specific": {} 00:15:50.893 } 00:15:50.893 ] 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.893 09:53:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.893 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.894 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.894 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.894 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.153 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.153 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.153 "name": "Existed_Raid", 00:15:51.153 "uuid": "0ef741ba-d7a2-4337-bd57-66794b997e35", 00:15:51.153 "strip_size_kb": 64, 00:15:51.153 "state": "configuring", 00:15:51.153 "raid_level": "raid5f", 00:15:51.153 "superblock": true, 00:15:51.153 "num_base_bdevs": 4, 00:15:51.153 "num_base_bdevs_discovered": 3, 00:15:51.153 "num_base_bdevs_operational": 4, 00:15:51.153 "base_bdevs_list": [ 00:15:51.153 { 00:15:51.153 "name": "BaseBdev1", 00:15:51.153 "uuid": "c1999781-b17c-44bd-895e-3af2b67536a4", 
00:15:51.153 "is_configured": true, 00:15:51.153 "data_offset": 2048, 00:15:51.153 "data_size": 63488 00:15:51.153 }, 00:15:51.153 { 00:15:51.153 "name": null, 00:15:51.153 "uuid": "6c396001-0d79-4d14-9cd6-c35a3a47eb3d", 00:15:51.153 "is_configured": false, 00:15:51.153 "data_offset": 0, 00:15:51.153 "data_size": 63488 00:15:51.153 }, 00:15:51.153 { 00:15:51.153 "name": "BaseBdev3", 00:15:51.153 "uuid": "836c1ddd-df4b-47f6-9fd4-d6682dd99f2d", 00:15:51.153 "is_configured": true, 00:15:51.153 "data_offset": 2048, 00:15:51.153 "data_size": 63488 00:15:51.153 }, 00:15:51.153 { 00:15:51.153 "name": "BaseBdev4", 00:15:51.153 "uuid": "ec5eb1ce-878a-45aa-a6ba-1e2b9464569c", 00:15:51.153 "is_configured": true, 00:15:51.153 "data_offset": 2048, 00:15:51.153 "data_size": 63488 00:15:51.153 } 00:15:51.153 ] 00:15:51.153 }' 00:15:51.153 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.153 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.413 [2024-12-06 09:53:16.659902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:51.413 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.673 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.673 "name": "Existed_Raid", 00:15:51.673 "uuid": "0ef741ba-d7a2-4337-bd57-66794b997e35", 00:15:51.673 "strip_size_kb": 64, 00:15:51.673 "state": "configuring", 00:15:51.673 "raid_level": "raid5f", 00:15:51.673 "superblock": true, 00:15:51.673 "num_base_bdevs": 4, 00:15:51.673 "num_base_bdevs_discovered": 2, 00:15:51.673 "num_base_bdevs_operational": 4, 00:15:51.673 "base_bdevs_list": [ 00:15:51.673 { 00:15:51.673 "name": "BaseBdev1", 00:15:51.673 "uuid": "c1999781-b17c-44bd-895e-3af2b67536a4", 00:15:51.673 "is_configured": true, 00:15:51.673 "data_offset": 2048, 00:15:51.673 "data_size": 63488 00:15:51.673 }, 00:15:51.673 { 00:15:51.673 "name": null, 00:15:51.673 "uuid": "6c396001-0d79-4d14-9cd6-c35a3a47eb3d", 00:15:51.673 "is_configured": false, 00:15:51.673 "data_offset": 0, 00:15:51.673 "data_size": 63488 00:15:51.673 }, 00:15:51.673 { 00:15:51.673 "name": null, 00:15:51.673 "uuid": "836c1ddd-df4b-47f6-9fd4-d6682dd99f2d", 00:15:51.673 "is_configured": false, 00:15:51.673 "data_offset": 0, 00:15:51.673 "data_size": 63488 00:15:51.673 }, 00:15:51.673 { 00:15:51.673 "name": "BaseBdev4", 00:15:51.673 "uuid": "ec5eb1ce-878a-45aa-a6ba-1e2b9464569c", 00:15:51.673 "is_configured": true, 00:15:51.673 "data_offset": 2048, 00:15:51.673 "data_size": 63488 00:15:51.673 } 00:15:51.673 ] 00:15:51.673 }' 00:15:51.673 09:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.673 09:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 
-- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.933 [2024-12-06 09:53:17.119103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.933 09:53:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.933 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.933 "name": "Existed_Raid", 00:15:51.933 "uuid": "0ef741ba-d7a2-4337-bd57-66794b997e35", 00:15:51.933 "strip_size_kb": 64, 00:15:51.933 "state": "configuring", 00:15:51.933 "raid_level": "raid5f", 00:15:51.934 "superblock": true, 00:15:51.934 "num_base_bdevs": 4, 00:15:51.934 "num_base_bdevs_discovered": 3, 00:15:51.934 "num_base_bdevs_operational": 4, 00:15:51.934 "base_bdevs_list": [ 00:15:51.934 { 00:15:51.934 "name": "BaseBdev1", 00:15:51.934 "uuid": "c1999781-b17c-44bd-895e-3af2b67536a4", 00:15:51.934 "is_configured": true, 00:15:51.934 "data_offset": 2048, 00:15:51.934 "data_size": 63488 00:15:51.934 }, 00:15:51.934 { 00:15:51.934 "name": null, 00:15:51.934 "uuid": "6c396001-0d79-4d14-9cd6-c35a3a47eb3d", 00:15:51.934 "is_configured": false, 00:15:51.934 "data_offset": 0, 00:15:51.934 "data_size": 63488 00:15:51.934 }, 00:15:51.934 { 00:15:51.934 "name": "BaseBdev3", 00:15:51.934 "uuid": "836c1ddd-df4b-47f6-9fd4-d6682dd99f2d", 00:15:51.934 
"is_configured": true, 00:15:51.934 "data_offset": 2048, 00:15:51.934 "data_size": 63488 00:15:51.934 }, 00:15:51.934 { 00:15:51.934 "name": "BaseBdev4", 00:15:51.934 "uuid": "ec5eb1ce-878a-45aa-a6ba-1e2b9464569c", 00:15:51.934 "is_configured": true, 00:15:51.934 "data_offset": 2048, 00:15:51.934 "data_size": 63488 00:15:51.934 } 00:15:51.934 ] 00:15:51.934 }' 00:15:51.934 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.934 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.504 [2024-12-06 09:53:17.614293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.504 "name": "Existed_Raid", 00:15:52.504 "uuid": "0ef741ba-d7a2-4337-bd57-66794b997e35", 00:15:52.504 "strip_size_kb": 64, 00:15:52.504 "state": "configuring", 00:15:52.504 "raid_level": "raid5f", 00:15:52.504 
"superblock": true, 00:15:52.504 "num_base_bdevs": 4, 00:15:52.504 "num_base_bdevs_discovered": 2, 00:15:52.504 "num_base_bdevs_operational": 4, 00:15:52.504 "base_bdevs_list": [ 00:15:52.504 { 00:15:52.504 "name": null, 00:15:52.504 "uuid": "c1999781-b17c-44bd-895e-3af2b67536a4", 00:15:52.504 "is_configured": false, 00:15:52.504 "data_offset": 0, 00:15:52.504 "data_size": 63488 00:15:52.504 }, 00:15:52.504 { 00:15:52.504 "name": null, 00:15:52.504 "uuid": "6c396001-0d79-4d14-9cd6-c35a3a47eb3d", 00:15:52.504 "is_configured": false, 00:15:52.504 "data_offset": 0, 00:15:52.504 "data_size": 63488 00:15:52.504 }, 00:15:52.504 { 00:15:52.504 "name": "BaseBdev3", 00:15:52.504 "uuid": "836c1ddd-df4b-47f6-9fd4-d6682dd99f2d", 00:15:52.504 "is_configured": true, 00:15:52.504 "data_offset": 2048, 00:15:52.504 "data_size": 63488 00:15:52.504 }, 00:15:52.504 { 00:15:52.504 "name": "BaseBdev4", 00:15:52.504 "uuid": "ec5eb1ce-878a-45aa-a6ba-1e2b9464569c", 00:15:52.504 "is_configured": true, 00:15:52.504 "data_offset": 2048, 00:15:52.504 "data_size": 63488 00:15:52.504 } 00:15:52.504 ] 00:15:52.504 }' 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.504 09:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.075 [2024-12-06 09:53:18.216872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.075 09:53:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.075 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.075 "name": "Existed_Raid", 00:15:53.075 "uuid": "0ef741ba-d7a2-4337-bd57-66794b997e35", 00:15:53.075 "strip_size_kb": 64, 00:15:53.075 "state": "configuring", 00:15:53.075 "raid_level": "raid5f", 00:15:53.075 "superblock": true, 00:15:53.075 "num_base_bdevs": 4, 00:15:53.075 "num_base_bdevs_discovered": 3, 00:15:53.075 "num_base_bdevs_operational": 4, 00:15:53.075 "base_bdevs_list": [ 00:15:53.075 { 00:15:53.075 "name": null, 00:15:53.075 "uuid": "c1999781-b17c-44bd-895e-3af2b67536a4", 00:15:53.075 "is_configured": false, 00:15:53.075 "data_offset": 0, 00:15:53.075 "data_size": 63488 00:15:53.075 }, 00:15:53.075 { 00:15:53.075 "name": "BaseBdev2", 00:15:53.075 "uuid": "6c396001-0d79-4d14-9cd6-c35a3a47eb3d", 00:15:53.075 "is_configured": true, 00:15:53.075 "data_offset": 2048, 00:15:53.075 "data_size": 63488 00:15:53.075 }, 00:15:53.075 { 00:15:53.075 "name": "BaseBdev3", 00:15:53.075 "uuid": "836c1ddd-df4b-47f6-9fd4-d6682dd99f2d", 00:15:53.075 "is_configured": true, 00:15:53.075 "data_offset": 2048, 00:15:53.075 "data_size": 63488 00:15:53.075 }, 00:15:53.076 { 00:15:53.076 "name": "BaseBdev4", 00:15:53.076 "uuid": "ec5eb1ce-878a-45aa-a6ba-1e2b9464569c", 00:15:53.076 "is_configured": true, 00:15:53.076 "data_offset": 2048, 00:15:53.076 "data_size": 63488 00:15:53.076 } 00:15:53.076 ] 00:15:53.076 }' 00:15:53.076 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:15:53.076 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c1999781-b17c-44bd-895e-3af2b67536a4 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.644 [2024-12-06 09:53:18.747538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:53.644 [2024-12-06 09:53:18.747763] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:53.644 [2024-12-06 09:53:18.747775] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:53.644 [2024-12-06 09:53:18.748018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:53.644 NewBaseBdev 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.644 [2024-12-06 09:53:18.754812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:53.644 [2024-12-06 09:53:18.754878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:53.644 [2024-12-06 09:53:18.755132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.644 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.644 [ 00:15:53.644 { 00:15:53.644 "name": "NewBaseBdev", 00:15:53.644 "aliases": [ 00:15:53.644 "c1999781-b17c-44bd-895e-3af2b67536a4" 00:15:53.644 ], 00:15:53.644 "product_name": "Malloc disk", 00:15:53.644 "block_size": 512, 00:15:53.644 "num_blocks": 65536, 00:15:53.644 "uuid": "c1999781-b17c-44bd-895e-3af2b67536a4", 00:15:53.644 "assigned_rate_limits": { 00:15:53.644 "rw_ios_per_sec": 0, 00:15:53.644 "rw_mbytes_per_sec": 0, 00:15:53.644 "r_mbytes_per_sec": 0, 00:15:53.644 "w_mbytes_per_sec": 0 00:15:53.644 }, 00:15:53.644 "claimed": true, 00:15:53.644 "claim_type": "exclusive_write", 00:15:53.644 "zoned": false, 00:15:53.644 "supported_io_types": { 00:15:53.644 "read": true, 00:15:53.644 "write": true, 00:15:53.644 "unmap": true, 00:15:53.644 "flush": true, 00:15:53.644 "reset": true, 00:15:53.644 "nvme_admin": false, 00:15:53.644 "nvme_io": false, 00:15:53.644 "nvme_io_md": false, 00:15:53.644 "write_zeroes": true, 00:15:53.644 "zcopy": true, 00:15:53.644 "get_zone_info": false, 00:15:53.645 "zone_management": false, 00:15:53.645 "zone_append": false, 00:15:53.645 "compare": false, 00:15:53.645 "compare_and_write": false, 00:15:53.645 "abort": true, 00:15:53.645 "seek_hole": false, 00:15:53.645 "seek_data": false, 00:15:53.645 "copy": true, 00:15:53.645 "nvme_iov_md": false 00:15:53.645 }, 00:15:53.645 "memory_domains": [ 00:15:53.645 { 00:15:53.645 "dma_device_id": "system", 00:15:53.645 "dma_device_type": 1 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.645 "dma_device_type": 2 00:15:53.645 } 
00:15:53.645 ], 00:15:53.645 "driver_specific": {} 00:15:53.645 } 00:15:53.645 ] 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.645 
09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.645 "name": "Existed_Raid", 00:15:53.645 "uuid": "0ef741ba-d7a2-4337-bd57-66794b997e35", 00:15:53.645 "strip_size_kb": 64, 00:15:53.645 "state": "online", 00:15:53.645 "raid_level": "raid5f", 00:15:53.645 "superblock": true, 00:15:53.645 "num_base_bdevs": 4, 00:15:53.645 "num_base_bdevs_discovered": 4, 00:15:53.645 "num_base_bdevs_operational": 4, 00:15:53.645 "base_bdevs_list": [ 00:15:53.645 { 00:15:53.645 "name": "NewBaseBdev", 00:15:53.645 "uuid": "c1999781-b17c-44bd-895e-3af2b67536a4", 00:15:53.645 "is_configured": true, 00:15:53.645 "data_offset": 2048, 00:15:53.645 "data_size": 63488 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "name": "BaseBdev2", 00:15:53.645 "uuid": "6c396001-0d79-4d14-9cd6-c35a3a47eb3d", 00:15:53.645 "is_configured": true, 00:15:53.645 "data_offset": 2048, 00:15:53.645 "data_size": 63488 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "name": "BaseBdev3", 00:15:53.645 "uuid": "836c1ddd-df4b-47f6-9fd4-d6682dd99f2d", 00:15:53.645 "is_configured": true, 00:15:53.645 "data_offset": 2048, 00:15:53.645 "data_size": 63488 00:15:53.645 }, 00:15:53.645 { 00:15:53.645 "name": "BaseBdev4", 00:15:53.645 "uuid": "ec5eb1ce-878a-45aa-a6ba-1e2b9464569c", 00:15:53.645 "is_configured": true, 00:15:53.645 "data_offset": 2048, 00:15:53.645 "data_size": 63488 00:15:53.645 } 00:15:53.645 ] 00:15:53.645 }' 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.645 09:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.213 [2024-12-06 09:53:19.250512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:54.213 "name": "Existed_Raid", 00:15:54.213 "aliases": [ 00:15:54.213 "0ef741ba-d7a2-4337-bd57-66794b997e35" 00:15:54.213 ], 00:15:54.213 "product_name": "Raid Volume", 00:15:54.213 "block_size": 512, 00:15:54.213 "num_blocks": 190464, 00:15:54.213 "uuid": "0ef741ba-d7a2-4337-bd57-66794b997e35", 00:15:54.213 "assigned_rate_limits": { 00:15:54.213 "rw_ios_per_sec": 0, 00:15:54.213 "rw_mbytes_per_sec": 0, 00:15:54.213 "r_mbytes_per_sec": 0, 00:15:54.213 "w_mbytes_per_sec": 0 00:15:54.213 }, 00:15:54.213 "claimed": false, 00:15:54.213 "zoned": false, 00:15:54.213 "supported_io_types": { 00:15:54.213 "read": true, 00:15:54.213 "write": true, 00:15:54.213 "unmap": false, 00:15:54.213 "flush": false, 
00:15:54.213 "reset": true, 00:15:54.213 "nvme_admin": false, 00:15:54.213 "nvme_io": false, 00:15:54.213 "nvme_io_md": false, 00:15:54.213 "write_zeroes": true, 00:15:54.213 "zcopy": false, 00:15:54.213 "get_zone_info": false, 00:15:54.213 "zone_management": false, 00:15:54.213 "zone_append": false, 00:15:54.213 "compare": false, 00:15:54.213 "compare_and_write": false, 00:15:54.213 "abort": false, 00:15:54.213 "seek_hole": false, 00:15:54.213 "seek_data": false, 00:15:54.213 "copy": false, 00:15:54.213 "nvme_iov_md": false 00:15:54.213 }, 00:15:54.213 "driver_specific": { 00:15:54.213 "raid": { 00:15:54.213 "uuid": "0ef741ba-d7a2-4337-bd57-66794b997e35", 00:15:54.213 "strip_size_kb": 64, 00:15:54.213 "state": "online", 00:15:54.213 "raid_level": "raid5f", 00:15:54.213 "superblock": true, 00:15:54.213 "num_base_bdevs": 4, 00:15:54.213 "num_base_bdevs_discovered": 4, 00:15:54.213 "num_base_bdevs_operational": 4, 00:15:54.213 "base_bdevs_list": [ 00:15:54.213 { 00:15:54.213 "name": "NewBaseBdev", 00:15:54.213 "uuid": "c1999781-b17c-44bd-895e-3af2b67536a4", 00:15:54.213 "is_configured": true, 00:15:54.213 "data_offset": 2048, 00:15:54.213 "data_size": 63488 00:15:54.213 }, 00:15:54.213 { 00:15:54.213 "name": "BaseBdev2", 00:15:54.213 "uuid": "6c396001-0d79-4d14-9cd6-c35a3a47eb3d", 00:15:54.213 "is_configured": true, 00:15:54.213 "data_offset": 2048, 00:15:54.213 "data_size": 63488 00:15:54.213 }, 00:15:54.213 { 00:15:54.213 "name": "BaseBdev3", 00:15:54.213 "uuid": "836c1ddd-df4b-47f6-9fd4-d6682dd99f2d", 00:15:54.213 "is_configured": true, 00:15:54.213 "data_offset": 2048, 00:15:54.213 "data_size": 63488 00:15:54.213 }, 00:15:54.213 { 00:15:54.213 "name": "BaseBdev4", 00:15:54.213 "uuid": "ec5eb1ce-878a-45aa-a6ba-1e2b9464569c", 00:15:54.213 "is_configured": true, 00:15:54.213 "data_offset": 2048, 00:15:54.213 "data_size": 63488 00:15:54.213 } 00:15:54.213 ] 00:15:54.213 } 00:15:54.213 } 00:15:54.213 }' 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:54.213 BaseBdev2 00:15:54.213 BaseBdev3 00:15:54.213 BaseBdev4' 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.213 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.472 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.472 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.472 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:54.472 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:54.472 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:54.472 09:53:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.472 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.472 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.472 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.473 [2024-12-06 09:53:19.557732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.473 [2024-12-06 09:53:19.557761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.473 [2024-12-06 09:53:19.557829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.473 [2024-12-06 09:53:19.558115] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.473 [2024-12-06 09:53:19.558127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83306 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83306 ']' 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 83306 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83306 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.473 killing process with pid 83306 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83306' 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83306 00:15:54.473 [2024-12-06 09:53:19.606090] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.473 09:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83306 00:15:54.732 [2024-12-06 09:53:19.983799] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.110 09:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:56.110 00:15:56.110 real 0m11.325s 00:15:56.110 user 0m17.921s 00:15:56.110 sys 0m2.116s 00:15:56.110 ************************************ 00:15:56.110 END TEST raid5f_state_function_test_sb 00:15:56.110 ************************************ 00:15:56.110 09:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.110 09:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.110 09:53:21 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:56.110 09:53:21 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:56.110 09:53:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.110 09:53:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:56.110 ************************************ 00:15:56.110 START TEST raid5f_superblock_test 00:15:56.110 ************************************ 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:56.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83971 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83971 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83971 ']' 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.110 09:53:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.110 [2024-12-06 09:53:21.261796] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:15:56.110 [2024-12-06 09:53:21.261998] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83971 ] 00:15:56.369 [2024-12-06 09:53:21.436865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.369 [2024-12-06 09:53:21.545610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.627 [2024-12-06 09:53:21.740275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.627 [2024-12-06 09:53:21.740403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.887 malloc1 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.887 [2024-12-06 09:53:22.139115] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:56.887 [2024-12-06 09:53:22.139245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.887 [2024-12-06 09:53:22.139287] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:56.887 [2024-12-06 09:53:22.139317] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.887 [2024-12-06 09:53:22.141377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.887 [2024-12-06 09:53:22.141449] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:56.887 pt1 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.887 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.147 malloc2 00:15:57.147 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.147 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:57.147 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.147 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.147 [2024-12-06 09:53:22.194619] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:57.147 [2024-12-06 09:53:22.194713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.147 [2024-12-06 09:53:22.194754] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:57.147 [2024-12-06 09:53:22.194782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.147 [2024-12-06 09:53:22.196790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.147 [2024-12-06 09:53:22.196861] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:57.147 pt2 00:15:57.147 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.147 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:57.147 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:57.147 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:57.147 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:57.147 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:57.147 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.148 malloc3 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.148 [2024-12-06 09:53:22.265064] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:57.148 [2024-12-06 09:53:22.265164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.148 [2024-12-06 09:53:22.265205] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:57.148 [2024-12-06 09:53:22.265234] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.148 [2024-12-06 09:53:22.267243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.148 [2024-12-06 09:53:22.267310] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:57.148 pt3 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.148 09:53:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.148 malloc4 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.148 [2024-12-06 09:53:22.317846] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:57.148 [2024-12-06 09:53:22.317950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.148 [2024-12-06 09:53:22.317991] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:57.148 [2024-12-06 09:53:22.318020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.148 [2024-12-06 09:53:22.320193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.148 [2024-12-06 09:53:22.320268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:57.148 pt4 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.148 [2024-12-06 09:53:22.329861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:57.148 [2024-12-06 09:53:22.331721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.148 [2024-12-06 09:53:22.331844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:57.148 [2024-12-06 09:53:22.331939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:57.148 [2024-12-06 09:53:22.332164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:57.148 [2024-12-06 09:53:22.332215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:57.148 [2024-12-06 09:53:22.332484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:57.148 [2024-12-06 09:53:22.339623] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:57.148 [2024-12-06 09:53:22.339680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:57.148 [2024-12-06 09:53:22.339910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.148 
09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.148 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.148 "name": "raid_bdev1", 00:15:57.148 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:15:57.148 "strip_size_kb": 64, 00:15:57.148 "state": "online", 00:15:57.148 "raid_level": "raid5f", 00:15:57.148 "superblock": true, 00:15:57.148 "num_base_bdevs": 4, 00:15:57.148 "num_base_bdevs_discovered": 4, 00:15:57.148 "num_base_bdevs_operational": 4, 00:15:57.148 "base_bdevs_list": [ 00:15:57.148 { 00:15:57.148 "name": "pt1", 00:15:57.148 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.148 "is_configured": true, 00:15:57.148 "data_offset": 2048, 00:15:57.148 "data_size": 63488 00:15:57.148 }, 00:15:57.148 { 00:15:57.148 "name": "pt2", 00:15:57.148 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.148 "is_configured": true, 00:15:57.148 "data_offset": 2048, 00:15:57.148 
"data_size": 63488 00:15:57.148 }, 00:15:57.148 { 00:15:57.148 "name": "pt3", 00:15:57.148 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.149 "is_configured": true, 00:15:57.149 "data_offset": 2048, 00:15:57.149 "data_size": 63488 00:15:57.149 }, 00:15:57.149 { 00:15:57.149 "name": "pt4", 00:15:57.149 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:57.149 "is_configured": true, 00:15:57.149 "data_offset": 2048, 00:15:57.149 "data_size": 63488 00:15:57.149 } 00:15:57.149 ] 00:15:57.149 }' 00:15:57.149 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.149 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.715 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:57.715 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:57.715 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:57.715 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:57.715 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:57.715 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:57.715 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:57.715 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.715 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.715 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:57.716 [2024-12-06 09:53:22.792593] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:57.716 "name": "raid_bdev1", 00:15:57.716 "aliases": [ 00:15:57.716 "54ec549e-d1c4-41a8-94a5-54407b57afa2" 00:15:57.716 ], 00:15:57.716 "product_name": "Raid Volume", 00:15:57.716 "block_size": 512, 00:15:57.716 "num_blocks": 190464, 00:15:57.716 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:15:57.716 "assigned_rate_limits": { 00:15:57.716 "rw_ios_per_sec": 0, 00:15:57.716 "rw_mbytes_per_sec": 0, 00:15:57.716 "r_mbytes_per_sec": 0, 00:15:57.716 "w_mbytes_per_sec": 0 00:15:57.716 }, 00:15:57.716 "claimed": false, 00:15:57.716 "zoned": false, 00:15:57.716 "supported_io_types": { 00:15:57.716 "read": true, 00:15:57.716 "write": true, 00:15:57.716 "unmap": false, 00:15:57.716 "flush": false, 00:15:57.716 "reset": true, 00:15:57.716 "nvme_admin": false, 00:15:57.716 "nvme_io": false, 00:15:57.716 "nvme_io_md": false, 00:15:57.716 "write_zeroes": true, 00:15:57.716 "zcopy": false, 00:15:57.716 "get_zone_info": false, 00:15:57.716 "zone_management": false, 00:15:57.716 "zone_append": false, 00:15:57.716 "compare": false, 00:15:57.716 "compare_and_write": false, 00:15:57.716 "abort": false, 00:15:57.716 "seek_hole": false, 00:15:57.716 "seek_data": false, 00:15:57.716 "copy": false, 00:15:57.716 "nvme_iov_md": false 00:15:57.716 }, 00:15:57.716 "driver_specific": { 00:15:57.716 "raid": { 00:15:57.716 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:15:57.716 "strip_size_kb": 64, 00:15:57.716 "state": "online", 00:15:57.716 "raid_level": "raid5f", 00:15:57.716 "superblock": true, 00:15:57.716 "num_base_bdevs": 4, 00:15:57.716 "num_base_bdevs_discovered": 4, 00:15:57.716 "num_base_bdevs_operational": 4, 00:15:57.716 "base_bdevs_list": [ 00:15:57.716 { 00:15:57.716 "name": "pt1", 00:15:57.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.716 "is_configured": true, 00:15:57.716 "data_offset": 2048, 
00:15:57.716 "data_size": 63488 00:15:57.716 }, 00:15:57.716 { 00:15:57.716 "name": "pt2", 00:15:57.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.716 "is_configured": true, 00:15:57.716 "data_offset": 2048, 00:15:57.716 "data_size": 63488 00:15:57.716 }, 00:15:57.716 { 00:15:57.716 "name": "pt3", 00:15:57.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.716 "is_configured": true, 00:15:57.716 "data_offset": 2048, 00:15:57.716 "data_size": 63488 00:15:57.716 }, 00:15:57.716 { 00:15:57.716 "name": "pt4", 00:15:57.716 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:57.716 "is_configured": true, 00:15:57.716 "data_offset": 2048, 00:15:57.716 "data_size": 63488 00:15:57.716 } 00:15:57.716 ] 00:15:57.716 } 00:15:57.716 } 00:15:57.716 }' 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:57.716 pt2 00:15:57.716 pt3 00:15:57.716 pt4' 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.716 09:53:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.716 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.975 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:57.975 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.975 09:53:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.975 09:53:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:57.975 [2024-12-06 09:53:23.143983] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=54ec549e-d1c4-41a8-94a5-54407b57afa2 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
54ec549e-d1c4-41a8-94a5-54407b57afa2 ']' 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.975 [2024-12-06 09:53:23.171768] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.975 [2024-12-06 09:53:23.171830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.975 [2024-12-06 09:53:23.171925] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.975 [2024-12-06 09:53:23.172042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.975 [2024-12-06 09:53:23.172091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:57.975 
09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.975 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.976 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:57.976 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:57.976 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.976 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.235 09:53:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:58.235 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.235 [2024-12-06 09:53:23.335511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:58.235 [2024-12-06 09:53:23.337400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:58.235 [2024-12-06 09:53:23.337490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:58.235 [2024-12-06 09:53:23.337541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:58.235 [2024-12-06 09:53:23.337629] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:58.235 [2024-12-06 09:53:23.337723] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:58.236 [2024-12-06 09:53:23.337805] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:58.236 [2024-12-06 09:53:23.337857] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:58.236 [2024-12-06 09:53:23.337871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.236 [2024-12-06 09:53:23.337881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:58.236 request: 00:15:58.236 { 00:15:58.236 "name": "raid_bdev1", 00:15:58.236 "raid_level": "raid5f", 00:15:58.236 "base_bdevs": [ 00:15:58.236 "malloc1", 00:15:58.236 "malloc2", 00:15:58.236 "malloc3", 00:15:58.236 "malloc4" 00:15:58.236 ], 00:15:58.236 "strip_size_kb": 64, 00:15:58.236 "superblock": false, 00:15:58.236 "method": "bdev_raid_create", 00:15:58.236 "req_id": 1 00:15:58.236 } 00:15:58.236 Got JSON-RPC error response 
00:15:58.236 response: 00:15:58.236 { 00:15:58.236 "code": -17, 00:15:58.236 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:58.236 } 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.236 [2024-12-06 09:53:23.399393] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:58.236 [2024-12-06 09:53:23.399486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:58.236 [2024-12-06 09:53:23.399520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:58.236 [2024-12-06 09:53:23.399550] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.236 [2024-12-06 09:53:23.401805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.236 [2024-12-06 09:53:23.401884] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:58.236 [2024-12-06 09:53:23.401986] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:58.236 [2024-12-06 09:53:23.402065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:58.236 pt1 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.236 "name": "raid_bdev1", 00:15:58.236 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:15:58.236 "strip_size_kb": 64, 00:15:58.236 "state": "configuring", 00:15:58.236 "raid_level": "raid5f", 00:15:58.236 "superblock": true, 00:15:58.236 "num_base_bdevs": 4, 00:15:58.236 "num_base_bdevs_discovered": 1, 00:15:58.236 "num_base_bdevs_operational": 4, 00:15:58.236 "base_bdevs_list": [ 00:15:58.236 { 00:15:58.236 "name": "pt1", 00:15:58.236 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.236 "is_configured": true, 00:15:58.236 "data_offset": 2048, 00:15:58.236 "data_size": 63488 00:15:58.236 }, 00:15:58.236 { 00:15:58.236 "name": null, 00:15:58.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.236 "is_configured": false, 00:15:58.236 "data_offset": 2048, 00:15:58.236 "data_size": 63488 00:15:58.236 }, 00:15:58.236 { 00:15:58.236 "name": null, 00:15:58.236 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.236 "is_configured": false, 00:15:58.236 "data_offset": 2048, 00:15:58.236 "data_size": 63488 00:15:58.236 }, 00:15:58.236 { 00:15:58.236 "name": null, 00:15:58.236 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:58.236 "is_configured": false, 00:15:58.236 "data_offset": 2048, 00:15:58.236 "data_size": 63488 00:15:58.236 } 00:15:58.236 ] 00:15:58.236 }' 
00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.236 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 [2024-12-06 09:53:23.842651] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.827 [2024-12-06 09:53:23.842728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.827 [2024-12-06 09:53:23.842748] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:58.827 [2024-12-06 09:53:23.842759] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.827 [2024-12-06 09:53:23.843194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.827 [2024-12-06 09:53:23.843215] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.827 [2024-12-06 09:53:23.843293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:58.827 [2024-12-06 09:53:23.843317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.827 pt2 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.827 [2024-12-06 09:53:23.854624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.827 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.828 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.828 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.828 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.828 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.828 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.828 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.828 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.828 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:58.828 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.828 "name": "raid_bdev1", 00:15:58.828 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:15:58.828 "strip_size_kb": 64, 00:15:58.828 "state": "configuring", 00:15:58.828 "raid_level": "raid5f", 00:15:58.828 "superblock": true, 00:15:58.828 "num_base_bdevs": 4, 00:15:58.828 "num_base_bdevs_discovered": 1, 00:15:58.828 "num_base_bdevs_operational": 4, 00:15:58.828 "base_bdevs_list": [ 00:15:58.828 { 00:15:58.828 "name": "pt1", 00:15:58.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.828 "is_configured": true, 00:15:58.828 "data_offset": 2048, 00:15:58.828 "data_size": 63488 00:15:58.828 }, 00:15:58.828 { 00:15:58.828 "name": null, 00:15:58.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.828 "is_configured": false, 00:15:58.828 "data_offset": 0, 00:15:58.828 "data_size": 63488 00:15:58.828 }, 00:15:58.828 { 00:15:58.828 "name": null, 00:15:58.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:58.828 "is_configured": false, 00:15:58.828 "data_offset": 2048, 00:15:58.828 "data_size": 63488 00:15:58.828 }, 00:15:58.828 { 00:15:58.828 "name": null, 00:15:58.828 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:58.828 "is_configured": false, 00:15:58.828 "data_offset": 2048, 00:15:58.828 "data_size": 63488 00:15:58.828 } 00:15:58.828 ] 00:15:58.828 }' 00:15:58.828 09:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.828 09:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.088 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:59.088 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:59.088 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:59.088 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.088 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.088 [2024-12-06 09:53:24.345785] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:59.088 [2024-12-06 09:53:24.345899] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.088 [2024-12-06 09:53:24.345937] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:59.088 [2024-12-06 09:53:24.345964] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.088 [2024-12-06 09:53:24.346457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.088 [2024-12-06 09:53:24.346514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:59.088 [2024-12-06 09:53:24.346624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:59.088 [2024-12-06 09:53:24.346673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:59.088 pt2 00:15:59.088 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.088 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:59.088 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:59.088 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:59.088 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.088 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.088 [2024-12-06 09:53:24.357738] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:59.088 [2024-12-06 09:53:24.357823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.088 [2024-12-06 09:53:24.357863] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:59.088 [2024-12-06 09:53:24.357893] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.088 [2024-12-06 09:53:24.358275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.088 [2024-12-06 09:53:24.358329] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:59.088 [2024-12-06 09:53:24.358419] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:59.088 [2024-12-06 09:53:24.358474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:59.347 pt3 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.347 [2024-12-06 09:53:24.369694] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:59.347 [2024-12-06 09:53:24.369736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.347 [2024-12-06 09:53:24.369752] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:59.347 [2024-12-06 09:53:24.369759] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.347 [2024-12-06 09:53:24.370136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.347 [2024-12-06 09:53:24.370151] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:59.347 [2024-12-06 09:53:24.370233] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:59.347 [2024-12-06 09:53:24.370253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:59.347 [2024-12-06 09:53:24.370386] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:59.347 [2024-12-06 09:53:24.370394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:59.347 [2024-12-06 09:53:24.370635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:59.347 [2024-12-06 09:53:24.377847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:59.347 [2024-12-06 09:53:24.377871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:59.347 [2024-12-06 09:53:24.378031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.347 pt4 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.347 "name": "raid_bdev1", 00:15:59.347 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:15:59.347 "strip_size_kb": 64, 00:15:59.347 "state": "online", 00:15:59.347 "raid_level": "raid5f", 00:15:59.347 "superblock": true, 00:15:59.347 "num_base_bdevs": 4, 00:15:59.347 "num_base_bdevs_discovered": 4, 00:15:59.347 "num_base_bdevs_operational": 4, 00:15:59.347 "base_bdevs_list": [ 00:15:59.347 { 00:15:59.347 "name": "pt1", 00:15:59.347 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.347 "is_configured": true, 00:15:59.347 
"data_offset": 2048, 00:15:59.347 "data_size": 63488 00:15:59.347 }, 00:15:59.347 { 00:15:59.347 "name": "pt2", 00:15:59.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.347 "is_configured": true, 00:15:59.347 "data_offset": 2048, 00:15:59.347 "data_size": 63488 00:15:59.347 }, 00:15:59.347 { 00:15:59.347 "name": "pt3", 00:15:59.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.347 "is_configured": true, 00:15:59.347 "data_offset": 2048, 00:15:59.347 "data_size": 63488 00:15:59.347 }, 00:15:59.347 { 00:15:59.347 "name": "pt4", 00:15:59.347 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:59.347 "is_configured": true, 00:15:59.347 "data_offset": 2048, 00:15:59.347 "data_size": 63488 00:15:59.347 } 00:15:59.347 ] 00:15:59.347 }' 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.347 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.607 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:59.607 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:59.607 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:59.607 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:59.607 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:59.607 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:59.607 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.607 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:59.607 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.607 09:53:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.607 [2024-12-06 09:53:24.814246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.607 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.607 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:59.607 "name": "raid_bdev1", 00:15:59.607 "aliases": [ 00:15:59.607 "54ec549e-d1c4-41a8-94a5-54407b57afa2" 00:15:59.607 ], 00:15:59.607 "product_name": "Raid Volume", 00:15:59.607 "block_size": 512, 00:15:59.607 "num_blocks": 190464, 00:15:59.607 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:15:59.607 "assigned_rate_limits": { 00:15:59.607 "rw_ios_per_sec": 0, 00:15:59.607 "rw_mbytes_per_sec": 0, 00:15:59.607 "r_mbytes_per_sec": 0, 00:15:59.607 "w_mbytes_per_sec": 0 00:15:59.607 }, 00:15:59.607 "claimed": false, 00:15:59.607 "zoned": false, 00:15:59.607 "supported_io_types": { 00:15:59.607 "read": true, 00:15:59.607 "write": true, 00:15:59.607 "unmap": false, 00:15:59.607 "flush": false, 00:15:59.607 "reset": true, 00:15:59.607 "nvme_admin": false, 00:15:59.607 "nvme_io": false, 00:15:59.607 "nvme_io_md": false, 00:15:59.607 "write_zeroes": true, 00:15:59.607 "zcopy": false, 00:15:59.607 "get_zone_info": false, 00:15:59.607 "zone_management": false, 00:15:59.607 "zone_append": false, 00:15:59.607 "compare": false, 00:15:59.607 "compare_and_write": false, 00:15:59.607 "abort": false, 00:15:59.607 "seek_hole": false, 00:15:59.607 "seek_data": false, 00:15:59.607 "copy": false, 00:15:59.607 "nvme_iov_md": false 00:15:59.607 }, 00:15:59.607 "driver_specific": { 00:15:59.607 "raid": { 00:15:59.607 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:15:59.607 "strip_size_kb": 64, 00:15:59.607 "state": "online", 00:15:59.607 "raid_level": "raid5f", 00:15:59.607 "superblock": true, 00:15:59.607 "num_base_bdevs": 4, 00:15:59.607 "num_base_bdevs_discovered": 4, 
00:15:59.607 "num_base_bdevs_operational": 4, 00:15:59.607 "base_bdevs_list": [ 00:15:59.607 { 00:15:59.607 "name": "pt1", 00:15:59.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.607 "is_configured": true, 00:15:59.607 "data_offset": 2048, 00:15:59.607 "data_size": 63488 00:15:59.607 }, 00:15:59.607 { 00:15:59.607 "name": "pt2", 00:15:59.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.607 "is_configured": true, 00:15:59.607 "data_offset": 2048, 00:15:59.607 "data_size": 63488 00:15:59.607 }, 00:15:59.607 { 00:15:59.607 "name": "pt3", 00:15:59.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:59.607 "is_configured": true, 00:15:59.607 "data_offset": 2048, 00:15:59.607 "data_size": 63488 00:15:59.607 }, 00:15:59.607 { 00:15:59.607 "name": "pt4", 00:15:59.607 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:59.607 "is_configured": true, 00:15:59.607 "data_offset": 2048, 00:15:59.607 "data_size": 63488 00:15:59.607 } 00:15:59.607 ] 00:15:59.607 } 00:15:59.607 } 00:15:59.607 }' 00:15:59.607 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:59.866 pt2 00:15:59.866 pt3 00:15:59.866 pt4' 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.866 09:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.866 09:53:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.866 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.866 [2024-12-06 09:53:25.125626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.124 
09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 54ec549e-d1c4-41a8-94a5-54407b57afa2 '!=' 54ec549e-d1c4-41a8-94a5-54407b57afa2 ']' 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.124 [2024-12-06 09:53:25.177439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.124 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.124 "name": "raid_bdev1", 00:16:00.124 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:16:00.124 "strip_size_kb": 64, 00:16:00.124 "state": "online", 00:16:00.124 "raid_level": "raid5f", 00:16:00.124 "superblock": true, 00:16:00.124 "num_base_bdevs": 4, 00:16:00.124 "num_base_bdevs_discovered": 3, 00:16:00.124 "num_base_bdevs_operational": 3, 00:16:00.124 "base_bdevs_list": [ 00:16:00.124 { 00:16:00.124 "name": null, 00:16:00.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.124 "is_configured": false, 00:16:00.124 "data_offset": 0, 00:16:00.124 "data_size": 63488 00:16:00.124 }, 00:16:00.125 { 00:16:00.125 "name": "pt2", 00:16:00.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.125 "is_configured": true, 00:16:00.125 "data_offset": 2048, 00:16:00.125 "data_size": 63488 00:16:00.125 }, 00:16:00.125 { 00:16:00.125 "name": "pt3", 00:16:00.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.125 "is_configured": true, 00:16:00.125 "data_offset": 2048, 00:16:00.125 "data_size": 63488 00:16:00.125 }, 00:16:00.125 { 00:16:00.125 "name": "pt4", 00:16:00.125 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:00.125 "is_configured": true, 00:16:00.125 
"data_offset": 2048, 00:16:00.125 "data_size": 63488 00:16:00.125 } 00:16:00.125 ] 00:16:00.125 }' 00:16:00.125 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.125 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.382 [2024-12-06 09:53:25.568735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.382 [2024-12-06 09:53:25.568819] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.382 [2024-12-06 09:53:25.568915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.382 [2024-12-06 09:53:25.569005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.382 [2024-12-06 09:53:25.569046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.382 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.641 [2024-12-06 09:53:25.664552] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:00.641 [2024-12-06 09:53:25.664605] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.641 [2024-12-06 09:53:25.664623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:00.641 [2024-12-06 09:53:25.664632] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.641 [2024-12-06 09:53:25.666803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.641 [2024-12-06 09:53:25.666840] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:00.641 [2024-12-06 09:53:25.666935] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:00.641 [2024-12-06 09:53:25.666980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:00.641 pt2 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.641 "name": "raid_bdev1", 00:16:00.641 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:16:00.641 "strip_size_kb": 64, 00:16:00.641 "state": "configuring", 00:16:00.641 "raid_level": "raid5f", 00:16:00.641 "superblock": true, 00:16:00.641 
"num_base_bdevs": 4, 00:16:00.641 "num_base_bdevs_discovered": 1, 00:16:00.641 "num_base_bdevs_operational": 3, 00:16:00.641 "base_bdevs_list": [ 00:16:00.641 { 00:16:00.641 "name": null, 00:16:00.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.641 "is_configured": false, 00:16:00.641 "data_offset": 2048, 00:16:00.641 "data_size": 63488 00:16:00.641 }, 00:16:00.641 { 00:16:00.641 "name": "pt2", 00:16:00.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.641 "is_configured": true, 00:16:00.641 "data_offset": 2048, 00:16:00.641 "data_size": 63488 00:16:00.641 }, 00:16:00.641 { 00:16:00.641 "name": null, 00:16:00.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.641 "is_configured": false, 00:16:00.641 "data_offset": 2048, 00:16:00.641 "data_size": 63488 00:16:00.641 }, 00:16:00.641 { 00:16:00.641 "name": null, 00:16:00.641 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:00.641 "is_configured": false, 00:16:00.641 "data_offset": 2048, 00:16:00.641 "data_size": 63488 00:16:00.641 } 00:16:00.641 ] 00:16:00.641 }' 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.641 09:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.899 [2024-12-06 09:53:26.083886] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:00.899 [2024-12-06 
09:53:26.084008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.899 [2024-12-06 09:53:26.084052] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:00.899 [2024-12-06 09:53:26.084083] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.899 [2024-12-06 09:53:26.084582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.899 [2024-12-06 09:53:26.084640] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:00.899 [2024-12-06 09:53:26.084752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:00.899 [2024-12-06 09:53:26.084802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:00.899 pt3 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.899 "name": "raid_bdev1", 00:16:00.899 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:16:00.899 "strip_size_kb": 64, 00:16:00.899 "state": "configuring", 00:16:00.899 "raid_level": "raid5f", 00:16:00.899 "superblock": true, 00:16:00.899 "num_base_bdevs": 4, 00:16:00.899 "num_base_bdevs_discovered": 2, 00:16:00.899 "num_base_bdevs_operational": 3, 00:16:00.899 "base_bdevs_list": [ 00:16:00.899 { 00:16:00.899 "name": null, 00:16:00.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.899 "is_configured": false, 00:16:00.899 "data_offset": 2048, 00:16:00.899 "data_size": 63488 00:16:00.899 }, 00:16:00.899 { 00:16:00.899 "name": "pt2", 00:16:00.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.899 "is_configured": true, 00:16:00.899 "data_offset": 2048, 00:16:00.899 "data_size": 63488 00:16:00.899 }, 00:16:00.899 { 00:16:00.899 "name": "pt3", 00:16:00.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.899 "is_configured": true, 00:16:00.899 "data_offset": 2048, 00:16:00.899 "data_size": 63488 00:16:00.899 }, 00:16:00.899 { 00:16:00.899 "name": null, 00:16:00.899 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:00.899 "is_configured": false, 00:16:00.899 "data_offset": 2048, 
00:16:00.899 "data_size": 63488 00:16:00.899 } 00:16:00.899 ] 00:16:00.899 }' 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.899 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.467 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:01.467 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:01.467 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:01.467 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:01.467 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.467 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.467 [2024-12-06 09:53:26.531126] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:01.467 [2024-12-06 09:53:26.531196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.467 [2024-12-06 09:53:26.531218] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:01.467 [2024-12-06 09:53:26.531227] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.467 [2024-12-06 09:53:26.531662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.467 [2024-12-06 09:53:26.531683] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:01.467 [2024-12-06 09:53:26.531765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:01.467 [2024-12-06 09:53:26.531796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:01.467 [2024-12-06 09:53:26.531962] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:01.467 [2024-12-06 09:53:26.531976] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:01.467 [2024-12-06 09:53:26.532250] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:01.467 pt4 00:16:01.467 [2024-12-06 09:53:26.538742] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:01.467 [2024-12-06 09:53:26.538766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:01.467 [2024-12-06 09:53:26.539049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.467 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.467 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:01.467 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.467 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.468 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.468 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.468 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.468 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.468 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.468 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.468 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.468 
09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.468 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.468 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.468 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.468 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.468 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.468 "name": "raid_bdev1", 00:16:01.468 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:16:01.468 "strip_size_kb": 64, 00:16:01.468 "state": "online", 00:16:01.468 "raid_level": "raid5f", 00:16:01.468 "superblock": true, 00:16:01.468 "num_base_bdevs": 4, 00:16:01.468 "num_base_bdevs_discovered": 3, 00:16:01.468 "num_base_bdevs_operational": 3, 00:16:01.468 "base_bdevs_list": [ 00:16:01.468 { 00:16:01.468 "name": null, 00:16:01.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.468 "is_configured": false, 00:16:01.468 "data_offset": 2048, 00:16:01.468 "data_size": 63488 00:16:01.468 }, 00:16:01.468 { 00:16:01.468 "name": "pt2", 00:16:01.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.468 "is_configured": true, 00:16:01.468 "data_offset": 2048, 00:16:01.468 "data_size": 63488 00:16:01.468 }, 00:16:01.468 { 00:16:01.468 "name": "pt3", 00:16:01.468 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.468 "is_configured": true, 00:16:01.468 "data_offset": 2048, 00:16:01.468 "data_size": 63488 00:16:01.468 }, 00:16:01.468 { 00:16:01.468 "name": "pt4", 00:16:01.468 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:01.468 "is_configured": true, 00:16:01.468 "data_offset": 2048, 00:16:01.468 "data_size": 63488 00:16:01.468 } 00:16:01.468 ] 00:16:01.468 }' 00:16:01.468 09:53:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.468 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.727 [2024-12-06 09:53:26.931507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.727 [2024-12-06 09:53:26.931533] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.727 [2024-12-06 09:53:26.931599] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.727 [2024-12-06 09:53:26.931666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.727 [2024-12-06 09:53:26.931677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.727 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.986 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.986 09:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:01.986 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.986 09:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.986 [2024-12-06 09:53:27.007368] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:01.986 [2024-12-06 09:53:27.007425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.986 [2024-12-06 09:53:27.007450] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:01.986 [2024-12-06 09:53:27.007463] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.986 [2024-12-06 09:53:27.009653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.986 [2024-12-06 09:53:27.009696] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:01.986 [2024-12-06 09:53:27.009783] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:01.986 [2024-12-06 09:53:27.009834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:01.986 
[2024-12-06 09:53:27.009967] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:01.986 [2024-12-06 09:53:27.009979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.986 [2024-12-06 09:53:27.009992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:01.986 [2024-12-06 09:53:27.010046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.986 [2024-12-06 09:53:27.010156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:01.986 pt1 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.986 "name": "raid_bdev1", 00:16:01.986 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:16:01.986 "strip_size_kb": 64, 00:16:01.986 "state": "configuring", 00:16:01.986 "raid_level": "raid5f", 00:16:01.986 "superblock": true, 00:16:01.986 "num_base_bdevs": 4, 00:16:01.986 "num_base_bdevs_discovered": 2, 00:16:01.986 "num_base_bdevs_operational": 3, 00:16:01.986 "base_bdevs_list": [ 00:16:01.986 { 00:16:01.986 "name": null, 00:16:01.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.986 "is_configured": false, 00:16:01.986 "data_offset": 2048, 00:16:01.986 "data_size": 63488 00:16:01.986 }, 00:16:01.986 { 00:16:01.986 "name": "pt2", 00:16:01.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.986 "is_configured": true, 00:16:01.986 "data_offset": 2048, 00:16:01.986 "data_size": 63488 00:16:01.986 }, 00:16:01.986 { 00:16:01.986 "name": "pt3", 00:16:01.986 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.986 "is_configured": true, 00:16:01.986 "data_offset": 2048, 00:16:01.986 "data_size": 63488 00:16:01.986 }, 00:16:01.986 { 00:16:01.986 "name": null, 00:16:01.986 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:01.986 "is_configured": false, 00:16:01.986 "data_offset": 2048, 00:16:01.986 "data_size": 63488 00:16:01.986 } 00:16:01.986 ] 
00:16:01.986 }' 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.986 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.252 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:02.252 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:02.252 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.252 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.252 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.252 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:02.252 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:02.252 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.252 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.252 [2024-12-06 09:53:27.490581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:02.252 [2024-12-06 09:53:27.490693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.252 [2024-12-06 09:53:27.490734] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:02.252 [2024-12-06 09:53:27.490762] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.252 [2024-12-06 09:53:27.491290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.252 [2024-12-06 09:53:27.491354] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:02.252 [2024-12-06 09:53:27.491477] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:02.252 [2024-12-06 09:53:27.491533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:02.252 [2024-12-06 09:53:27.491716] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:02.253 [2024-12-06 09:53:27.491760] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:02.253 [2024-12-06 09:53:27.492063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:02.253 [2024-12-06 09:53:27.500036] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:02.253 [2024-12-06 09:53:27.500098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:02.253 [2024-12-06 09:53:27.500385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.253 pt4 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.253 09:53:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.253 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.516 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.516 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.516 "name": "raid_bdev1", 00:16:02.516 "uuid": "54ec549e-d1c4-41a8-94a5-54407b57afa2", 00:16:02.516 "strip_size_kb": 64, 00:16:02.516 "state": "online", 00:16:02.516 "raid_level": "raid5f", 00:16:02.516 "superblock": true, 00:16:02.516 "num_base_bdevs": 4, 00:16:02.516 "num_base_bdevs_discovered": 3, 00:16:02.516 "num_base_bdevs_operational": 3, 00:16:02.516 "base_bdevs_list": [ 00:16:02.516 { 00:16:02.516 "name": null, 00:16:02.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.516 "is_configured": false, 00:16:02.516 "data_offset": 2048, 00:16:02.516 "data_size": 63488 00:16:02.516 }, 00:16:02.516 { 00:16:02.516 "name": "pt2", 00:16:02.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.516 "is_configured": true, 00:16:02.516 "data_offset": 2048, 00:16:02.516 "data_size": 63488 00:16:02.516 }, 00:16:02.516 { 00:16:02.516 "name": "pt3", 00:16:02.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:02.516 "is_configured": true, 00:16:02.516 "data_offset": 2048, 00:16:02.516 "data_size": 63488 
00:16:02.516 }, 00:16:02.516 { 00:16:02.516 "name": "pt4", 00:16:02.516 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:02.516 "is_configured": true, 00:16:02.516 "data_offset": 2048, 00:16:02.516 "data_size": 63488 00:16:02.516 } 00:16:02.516 ] 00:16:02.516 }' 00:16:02.516 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.516 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.774 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:02.774 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.774 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.774 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:02.774 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.774 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:02.774 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.774 09:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:02.774 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.774 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.774 [2024-12-06 09:53:27.977748] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.774 09:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.774 09:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 54ec549e-d1c4-41a8-94a5-54407b57afa2 '!=' 54ec549e-d1c4-41a8-94a5-54407b57afa2 ']' 00:16:02.774 09:53:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83971 00:16:02.774 09:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83971 ']' 00:16:02.774 09:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83971 00:16:02.774 09:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:02.774 09:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.774 09:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83971 00:16:03.033 killing process with pid 83971 00:16:03.033 09:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.033 09:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.033 09:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83971' 00:16:03.033 09:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83971 00:16:03.033 [2024-12-06 09:53:28.055826] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:03.033 [2024-12-06 09:53:28.055919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.033 [2024-12-06 09:53:28.055992] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.033 [2024-12-06 09:53:28.056007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:03.033 09:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83971 00:16:03.292 [2024-12-06 09:53:28.430367] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:04.672 09:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:04.672 
00:16:04.672 real 0m8.358s 00:16:04.672 user 0m13.125s 00:16:04.672 sys 0m1.568s 00:16:04.672 ************************************ 00:16:04.672 END TEST raid5f_superblock_test 00:16:04.672 ************************************ 00:16:04.672 09:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.672 09:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.672 09:53:29 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:04.672 09:53:29 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:04.672 09:53:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:04.672 09:53:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.672 09:53:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:04.672 ************************************ 00:16:04.672 START TEST raid5f_rebuild_test 00:16:04.672 ************************************ 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:04.672 09:53:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84457 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84457 00:16:04.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84457 ']' 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.672 09:53:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.672 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:04.672 Zero copy mechanism will not be used. 00:16:04.672 [2024-12-06 09:53:29.715369] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:16:04.672 [2024-12-06 09:53:29.715489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84457 ] 00:16:04.672 [2024-12-06 09:53:29.891572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.931 [2024-12-06 09:53:30.006666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.190 [2024-12-06 09:53:30.204158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.190 [2024-12-06 09:53:30.204263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.450 BaseBdev1_malloc 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.450 [2024-12-06 09:53:30.572075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:05.450 [2024-12-06 09:53:30.572193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.450 [2024-12-06 09:53:30.572234] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:05.450 [2024-12-06 09:53:30.572265] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.450 [2024-12-06 09:53:30.574388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.450 [2024-12-06 09:53:30.574489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:05.450 BaseBdev1 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.450 BaseBdev2_malloc 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.450 [2024-12-06 09:53:30.626159] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:05.450 [2024-12-06 09:53:30.626254] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.450 [2024-12-06 09:53:30.626294] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:05.450 [2024-12-06 09:53:30.626324] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.450 [2024-12-06 09:53:30.628379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.450 [2024-12-06 09:53:30.628461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:05.450 BaseBdev2 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.450 BaseBdev3_malloc 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.450 [2024-12-06 09:53:30.696858] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:05.450 [2024-12-06 09:53:30.696963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.450 [2024-12-06 09:53:30.697001] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:05.450 [2024-12-06 09:53:30.697030] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.450 
[2024-12-06 09:53:30.699011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.450 [2024-12-06 09:53:30.699090] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:05.450 BaseBdev3 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.450 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.710 BaseBdev4_malloc 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.710 [2024-12-06 09:53:30.751375] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:05.710 [2024-12-06 09:53:30.751431] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.710 [2024-12-06 09:53:30.751449] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:05.710 [2024-12-06 09:53:30.751460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.710 [2024-12-06 09:53:30.753472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.710 [2024-12-06 09:53:30.753509] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:05.710 BaseBdev4 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.710 spare_malloc 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.710 spare_delay 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.710 [2024-12-06 09:53:30.809464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:05.710 [2024-12-06 09:53:30.809516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.710 [2024-12-06 09:53:30.809545] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:05.710 [2024-12-06 09:53:30.809556] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.710 [2024-12-06 09:53:30.811658] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.710 [2024-12-06 09:53:30.811695] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:05.710 spare 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.710 [2024-12-06 09:53:30.817495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.710 [2024-12-06 09:53:30.819294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:05.710 [2024-12-06 09:53:30.819356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:05.710 [2024-12-06 09:53:30.819407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:05.710 [2024-12-06 09:53:30.819495] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:05.710 [2024-12-06 09:53:30.819514] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:05.710 [2024-12-06 09:53:30.819760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:05.710 [2024-12-06 09:53:30.826682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:05.710 [2024-12-06 09:53:30.826701] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:05.710 [2024-12-06 09:53:30.826872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.710 09:53:30 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.710 "name": "raid_bdev1", 00:16:05.710 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:05.710 "strip_size_kb": 64, 00:16:05.710 "state": "online", 00:16:05.710 
"raid_level": "raid5f", 00:16:05.710 "superblock": false, 00:16:05.710 "num_base_bdevs": 4, 00:16:05.710 "num_base_bdevs_discovered": 4, 00:16:05.710 "num_base_bdevs_operational": 4, 00:16:05.710 "base_bdevs_list": [ 00:16:05.710 { 00:16:05.710 "name": "BaseBdev1", 00:16:05.710 "uuid": "f9bbcf73-6340-5791-a71b-5201ec975d58", 00:16:05.710 "is_configured": true, 00:16:05.710 "data_offset": 0, 00:16:05.710 "data_size": 65536 00:16:05.710 }, 00:16:05.710 { 00:16:05.710 "name": "BaseBdev2", 00:16:05.710 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:05.710 "is_configured": true, 00:16:05.710 "data_offset": 0, 00:16:05.710 "data_size": 65536 00:16:05.710 }, 00:16:05.710 { 00:16:05.710 "name": "BaseBdev3", 00:16:05.710 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:05.710 "is_configured": true, 00:16:05.710 "data_offset": 0, 00:16:05.710 "data_size": 65536 00:16:05.710 }, 00:16:05.710 { 00:16:05.710 "name": "BaseBdev4", 00:16:05.710 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:05.710 "is_configured": true, 00:16:05.710 "data_offset": 0, 00:16:05.710 "data_size": 65536 00:16:05.710 } 00:16:05.710 ] 00:16:05.710 }' 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.710 09:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.969 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:05.969 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.969 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.969 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:05.969 [2024-12-06 09:53:31.238746] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:06.228 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:06.228 [2024-12-06 09:53:31.494066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:06.488 /dev/nbd0 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.488 1+0 records in 00:16:06.488 1+0 records out 00:16:06.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309651 s, 13.2 MB/s 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:06.488 09:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:07.057 512+0 records in 00:16:07.057 512+0 records out 00:16:07.057 100663296 bytes (101 MB, 96 MiB) copied, 0.512147 s, 197 MB/s 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.057 
[2024-12-06 09:53:32.291526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.057 [2024-12-06 09:53:32.305618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.057 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.058 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.058 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.058 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.058 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.058 09:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.058 09:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.317 09:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.317 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.317 "name": "raid_bdev1", 00:16:07.317 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:07.317 "strip_size_kb": 64, 00:16:07.317 "state": "online", 00:16:07.317 "raid_level": "raid5f", 00:16:07.317 "superblock": false, 00:16:07.317 "num_base_bdevs": 4, 00:16:07.317 "num_base_bdevs_discovered": 3, 00:16:07.317 "num_base_bdevs_operational": 3, 00:16:07.317 "base_bdevs_list": [ 00:16:07.317 { 00:16:07.317 "name": null, 00:16:07.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.317 "is_configured": false, 00:16:07.317 "data_offset": 0, 00:16:07.317 "data_size": 65536 00:16:07.317 }, 00:16:07.317 { 00:16:07.317 "name": "BaseBdev2", 00:16:07.317 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:07.317 "is_configured": true, 00:16:07.317 "data_offset": 0, 00:16:07.317 "data_size": 65536 00:16:07.317 }, 00:16:07.317 { 00:16:07.317 "name": "BaseBdev3", 00:16:07.317 "uuid": 
"e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:07.317 "is_configured": true, 00:16:07.317 "data_offset": 0, 00:16:07.317 "data_size": 65536 00:16:07.317 }, 00:16:07.317 { 00:16:07.317 "name": "BaseBdev4", 00:16:07.317 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:07.317 "is_configured": true, 00:16:07.317 "data_offset": 0, 00:16:07.317 "data_size": 65536 00:16:07.317 } 00:16:07.317 ] 00:16:07.317 }' 00:16:07.317 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.317 09:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.576 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.576 09:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.576 09:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.576 [2024-12-06 09:53:32.772828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.576 [2024-12-06 09:53:32.788603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:07.576 09:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.576 09:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:07.576 [2024-12-06 09:53:32.798221] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.956 09:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.956 09:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.956 09:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.956 09:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.956 09:53:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.956 09:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.956 09:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.956 09:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.956 09:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.956 09:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.956 09:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.956 "name": "raid_bdev1", 00:16:08.956 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:08.956 "strip_size_kb": 64, 00:16:08.956 "state": "online", 00:16:08.956 "raid_level": "raid5f", 00:16:08.956 "superblock": false, 00:16:08.956 "num_base_bdevs": 4, 00:16:08.956 "num_base_bdevs_discovered": 4, 00:16:08.956 "num_base_bdevs_operational": 4, 00:16:08.956 "process": { 00:16:08.956 "type": "rebuild", 00:16:08.956 "target": "spare", 00:16:08.956 "progress": { 00:16:08.956 "blocks": 19200, 00:16:08.957 "percent": 9 00:16:08.957 } 00:16:08.957 }, 00:16:08.957 "base_bdevs_list": [ 00:16:08.957 { 00:16:08.957 "name": "spare", 00:16:08.957 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:08.957 "is_configured": true, 00:16:08.957 "data_offset": 0, 00:16:08.957 "data_size": 65536 00:16:08.957 }, 00:16:08.957 { 00:16:08.957 "name": "BaseBdev2", 00:16:08.957 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:08.957 "is_configured": true, 00:16:08.957 "data_offset": 0, 00:16:08.957 "data_size": 65536 00:16:08.957 }, 00:16:08.957 { 00:16:08.957 "name": "BaseBdev3", 00:16:08.957 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:08.957 "is_configured": true, 00:16:08.957 "data_offset": 0, 00:16:08.957 "data_size": 65536 00:16:08.957 }, 
00:16:08.957 { 00:16:08.957 "name": "BaseBdev4", 00:16:08.957 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:08.957 "is_configured": true, 00:16:08.957 "data_offset": 0, 00:16:08.957 "data_size": 65536 00:16:08.957 } 00:16:08.957 ] 00:16:08.957 }' 00:16:08.957 09:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.957 09:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.957 09:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.957 09:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.957 09:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:08.957 09:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.957 09:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.957 [2024-12-06 09:53:33.941420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.957 [2024-12-06 09:53:34.005476] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:08.957 [2024-12-06 09:53:34.005586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.957 [2024-12-06 09:53:34.005604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.957 [2024-12-06 09:53:34.005616] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.957 "name": "raid_bdev1", 00:16:08.957 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:08.957 "strip_size_kb": 64, 00:16:08.957 "state": "online", 00:16:08.957 "raid_level": "raid5f", 00:16:08.957 "superblock": false, 00:16:08.957 "num_base_bdevs": 4, 00:16:08.957 "num_base_bdevs_discovered": 3, 00:16:08.957 "num_base_bdevs_operational": 3, 00:16:08.957 "base_bdevs_list": [ 00:16:08.957 { 00:16:08.957 "name": null, 00:16:08.957 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:08.957 "is_configured": false, 00:16:08.957 "data_offset": 0, 00:16:08.957 "data_size": 65536 00:16:08.957 }, 00:16:08.957 { 00:16:08.957 "name": "BaseBdev2", 00:16:08.957 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:08.957 "is_configured": true, 00:16:08.957 "data_offset": 0, 00:16:08.957 "data_size": 65536 00:16:08.957 }, 00:16:08.957 { 00:16:08.957 "name": "BaseBdev3", 00:16:08.957 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:08.957 "is_configured": true, 00:16:08.957 "data_offset": 0, 00:16:08.957 "data_size": 65536 00:16:08.957 }, 00:16:08.957 { 00:16:08.957 "name": "BaseBdev4", 00:16:08.957 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:08.957 "is_configured": true, 00:16:08.957 "data_offset": 0, 00:16:08.957 "data_size": 65536 00:16:08.957 } 00:16:08.957 ] 00:16:08.957 }' 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.957 09:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.216 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:09.216 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.216 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:09.216 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:09.216 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.476 "name": "raid_bdev1", 00:16:09.476 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:09.476 "strip_size_kb": 64, 00:16:09.476 "state": "online", 00:16:09.476 "raid_level": "raid5f", 00:16:09.476 "superblock": false, 00:16:09.476 "num_base_bdevs": 4, 00:16:09.476 "num_base_bdevs_discovered": 3, 00:16:09.476 "num_base_bdevs_operational": 3, 00:16:09.476 "base_bdevs_list": [ 00:16:09.476 { 00:16:09.476 "name": null, 00:16:09.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.476 "is_configured": false, 00:16:09.476 "data_offset": 0, 00:16:09.476 "data_size": 65536 00:16:09.476 }, 00:16:09.476 { 00:16:09.476 "name": "BaseBdev2", 00:16:09.476 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:09.476 "is_configured": true, 00:16:09.476 "data_offset": 0, 00:16:09.476 "data_size": 65536 00:16:09.476 }, 00:16:09.476 { 00:16:09.476 "name": "BaseBdev3", 00:16:09.476 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:09.476 "is_configured": true, 00:16:09.476 "data_offset": 0, 00:16:09.476 "data_size": 65536 00:16:09.476 }, 00:16:09.476 { 00:16:09.476 "name": "BaseBdev4", 00:16:09.476 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:09.476 "is_configured": true, 00:16:09.476 "data_offset": 0, 00:16:09.476 "data_size": 65536 00:16:09.476 } 00:16:09.476 ] 00:16:09.476 }' 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.476 [2024-12-06 09:53:34.634238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.476 [2024-12-06 09:53:34.648630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.476 09:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:09.476 [2024-12-06 09:53:34.657359] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.414 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.414 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.414 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.414 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.414 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.414 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.414 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.414 09:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.414 09:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.414 09:53:35 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.673 "name": "raid_bdev1", 00:16:10.673 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:10.673 "strip_size_kb": 64, 00:16:10.673 "state": "online", 00:16:10.673 "raid_level": "raid5f", 00:16:10.673 "superblock": false, 00:16:10.673 "num_base_bdevs": 4, 00:16:10.673 "num_base_bdevs_discovered": 4, 00:16:10.673 "num_base_bdevs_operational": 4, 00:16:10.673 "process": { 00:16:10.673 "type": "rebuild", 00:16:10.673 "target": "spare", 00:16:10.673 "progress": { 00:16:10.673 "blocks": 19200, 00:16:10.673 "percent": 9 00:16:10.673 } 00:16:10.673 }, 00:16:10.673 "base_bdevs_list": [ 00:16:10.673 { 00:16:10.673 "name": "spare", 00:16:10.673 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:10.673 "is_configured": true, 00:16:10.673 "data_offset": 0, 00:16:10.673 "data_size": 65536 00:16:10.673 }, 00:16:10.673 { 00:16:10.673 "name": "BaseBdev2", 00:16:10.673 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:10.673 "is_configured": true, 00:16:10.673 "data_offset": 0, 00:16:10.673 "data_size": 65536 00:16:10.673 }, 00:16:10.673 { 00:16:10.673 "name": "BaseBdev3", 00:16:10.673 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:10.673 "is_configured": true, 00:16:10.673 "data_offset": 0, 00:16:10.673 "data_size": 65536 00:16:10.673 }, 00:16:10.673 { 00:16:10.673 "name": "BaseBdev4", 00:16:10.673 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:10.673 "is_configured": true, 00:16:10.673 "data_offset": 0, 00:16:10.673 "data_size": 65536 00:16:10.673 } 00:16:10.673 ] 00:16:10.673 }' 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=609 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.673 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.673 "name": "raid_bdev1", 00:16:10.673 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:10.673 "strip_size_kb": 64, 
00:16:10.673 "state": "online", 00:16:10.673 "raid_level": "raid5f", 00:16:10.673 "superblock": false, 00:16:10.673 "num_base_bdevs": 4, 00:16:10.673 "num_base_bdevs_discovered": 4, 00:16:10.673 "num_base_bdevs_operational": 4, 00:16:10.673 "process": { 00:16:10.673 "type": "rebuild", 00:16:10.673 "target": "spare", 00:16:10.673 "progress": { 00:16:10.673 "blocks": 21120, 00:16:10.673 "percent": 10 00:16:10.673 } 00:16:10.673 }, 00:16:10.673 "base_bdevs_list": [ 00:16:10.673 { 00:16:10.673 "name": "spare", 00:16:10.673 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:10.673 "is_configured": true, 00:16:10.673 "data_offset": 0, 00:16:10.673 "data_size": 65536 00:16:10.673 }, 00:16:10.673 { 00:16:10.673 "name": "BaseBdev2", 00:16:10.673 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:10.673 "is_configured": true, 00:16:10.673 "data_offset": 0, 00:16:10.673 "data_size": 65536 00:16:10.673 }, 00:16:10.674 { 00:16:10.674 "name": "BaseBdev3", 00:16:10.674 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:10.674 "is_configured": true, 00:16:10.674 "data_offset": 0, 00:16:10.674 "data_size": 65536 00:16:10.674 }, 00:16:10.674 { 00:16:10.674 "name": "BaseBdev4", 00:16:10.674 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:10.674 "is_configured": true, 00:16:10.674 "data_offset": 0, 00:16:10.674 "data_size": 65536 00:16:10.674 } 00:16:10.674 ] 00:16:10.674 }' 00:16:10.674 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.674 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.674 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.674 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.674 09:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.054 09:53:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.054 09:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.054 09:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.054 09:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.054 09:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.054 09:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.054 09:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.054 09:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.054 09:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.054 09:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.054 09:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.054 09:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.054 "name": "raid_bdev1", 00:16:12.054 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:12.054 "strip_size_kb": 64, 00:16:12.054 "state": "online", 00:16:12.054 "raid_level": "raid5f", 00:16:12.054 "superblock": false, 00:16:12.054 "num_base_bdevs": 4, 00:16:12.054 "num_base_bdevs_discovered": 4, 00:16:12.054 "num_base_bdevs_operational": 4, 00:16:12.054 "process": { 00:16:12.054 "type": "rebuild", 00:16:12.054 "target": "spare", 00:16:12.054 "progress": { 00:16:12.054 "blocks": 42240, 00:16:12.054 "percent": 21 00:16:12.054 } 00:16:12.054 }, 00:16:12.054 "base_bdevs_list": [ 00:16:12.054 { 00:16:12.054 "name": "spare", 00:16:12.054 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:12.054 "is_configured": true, 
00:16:12.054 "data_offset": 0, 00:16:12.054 "data_size": 65536 00:16:12.054 }, 00:16:12.054 { 00:16:12.054 "name": "BaseBdev2", 00:16:12.054 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:12.054 "is_configured": true, 00:16:12.054 "data_offset": 0, 00:16:12.054 "data_size": 65536 00:16:12.054 }, 00:16:12.054 { 00:16:12.054 "name": "BaseBdev3", 00:16:12.054 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:12.054 "is_configured": true, 00:16:12.054 "data_offset": 0, 00:16:12.054 "data_size": 65536 00:16:12.054 }, 00:16:12.054 { 00:16:12.054 "name": "BaseBdev4", 00:16:12.054 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:12.054 "is_configured": true, 00:16:12.054 "data_offset": 0, 00:16:12.054 "data_size": 65536 00:16:12.054 } 00:16:12.054 ] 00:16:12.054 }' 00:16:12.054 09:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.054 09:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.055 09:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.055 09:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.055 09:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.995 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.995 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.995 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.995 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.995 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.995 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:12.995 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.995 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.995 09:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.995 09:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.995 09:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.995 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.995 "name": "raid_bdev1", 00:16:12.995 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:12.995 "strip_size_kb": 64, 00:16:12.995 "state": "online", 00:16:12.995 "raid_level": "raid5f", 00:16:12.995 "superblock": false, 00:16:12.995 "num_base_bdevs": 4, 00:16:12.995 "num_base_bdevs_discovered": 4, 00:16:12.995 "num_base_bdevs_operational": 4, 00:16:12.995 "process": { 00:16:12.995 "type": "rebuild", 00:16:12.995 "target": "spare", 00:16:12.995 "progress": { 00:16:12.995 "blocks": 63360, 00:16:12.995 "percent": 32 00:16:12.995 } 00:16:12.995 }, 00:16:12.995 "base_bdevs_list": [ 00:16:12.995 { 00:16:12.995 "name": "spare", 00:16:12.996 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:12.996 "is_configured": true, 00:16:12.996 "data_offset": 0, 00:16:12.996 "data_size": 65536 00:16:12.996 }, 00:16:12.996 { 00:16:12.996 "name": "BaseBdev2", 00:16:12.996 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:12.996 "is_configured": true, 00:16:12.996 "data_offset": 0, 00:16:12.996 "data_size": 65536 00:16:12.996 }, 00:16:12.996 { 00:16:12.996 "name": "BaseBdev3", 00:16:12.996 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:12.996 "is_configured": true, 00:16:12.996 "data_offset": 0, 00:16:12.996 "data_size": 65536 00:16:12.996 }, 00:16:12.996 { 00:16:12.996 "name": "BaseBdev4", 00:16:12.996 "uuid": 
"32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:12.996 "is_configured": true, 00:16:12.996 "data_offset": 0, 00:16:12.996 "data_size": 65536 00:16:12.996 } 00:16:12.996 ] 00:16:12.996 }' 00:16:12.996 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.996 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.996 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.996 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.996 09:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.375 "name": "raid_bdev1", 00:16:14.375 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:14.375 "strip_size_kb": 64, 00:16:14.375 "state": "online", 00:16:14.375 "raid_level": "raid5f", 00:16:14.375 "superblock": false, 00:16:14.375 "num_base_bdevs": 4, 00:16:14.375 "num_base_bdevs_discovered": 4, 00:16:14.375 "num_base_bdevs_operational": 4, 00:16:14.375 "process": { 00:16:14.375 "type": "rebuild", 00:16:14.375 "target": "spare", 00:16:14.375 "progress": { 00:16:14.375 "blocks": 86400, 00:16:14.375 "percent": 43 00:16:14.375 } 00:16:14.375 }, 00:16:14.375 "base_bdevs_list": [ 00:16:14.375 { 00:16:14.375 "name": "spare", 00:16:14.375 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:14.375 "is_configured": true, 00:16:14.375 "data_offset": 0, 00:16:14.375 "data_size": 65536 00:16:14.375 }, 00:16:14.375 { 00:16:14.375 "name": "BaseBdev2", 00:16:14.375 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:14.375 "is_configured": true, 00:16:14.375 "data_offset": 0, 00:16:14.375 "data_size": 65536 00:16:14.375 }, 00:16:14.375 { 00:16:14.375 "name": "BaseBdev3", 00:16:14.375 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:14.375 "is_configured": true, 00:16:14.375 "data_offset": 0, 00:16:14.375 "data_size": 65536 00:16:14.375 }, 00:16:14.375 { 00:16:14.375 "name": "BaseBdev4", 00:16:14.375 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:14.375 "is_configured": true, 00:16:14.375 "data_offset": 0, 00:16:14.375 "data_size": 65536 00:16:14.375 } 00:16:14.375 ] 00:16:14.375 }' 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:14.375 09:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.313 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.313 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.313 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.313 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.313 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.313 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.313 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.313 09:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.313 09:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.313 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.313 09:53:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.313 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.313 "name": "raid_bdev1", 00:16:15.313 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:15.313 "strip_size_kb": 64, 00:16:15.313 "state": "online", 00:16:15.313 "raid_level": "raid5f", 00:16:15.313 "superblock": false, 00:16:15.313 "num_base_bdevs": 4, 00:16:15.313 "num_base_bdevs_discovered": 4, 00:16:15.313 "num_base_bdevs_operational": 4, 00:16:15.313 "process": { 00:16:15.313 "type": "rebuild", 00:16:15.313 "target": "spare", 00:16:15.313 "progress": { 00:16:15.313 "blocks": 107520, 00:16:15.313 "percent": 54 00:16:15.313 } 00:16:15.313 }, 00:16:15.313 
"base_bdevs_list": [ 00:16:15.313 { 00:16:15.313 "name": "spare", 00:16:15.313 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:15.313 "is_configured": true, 00:16:15.313 "data_offset": 0, 00:16:15.313 "data_size": 65536 00:16:15.313 }, 00:16:15.313 { 00:16:15.313 "name": "BaseBdev2", 00:16:15.313 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:15.313 "is_configured": true, 00:16:15.313 "data_offset": 0, 00:16:15.313 "data_size": 65536 00:16:15.313 }, 00:16:15.313 { 00:16:15.313 "name": "BaseBdev3", 00:16:15.313 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:15.313 "is_configured": true, 00:16:15.313 "data_offset": 0, 00:16:15.313 "data_size": 65536 00:16:15.313 }, 00:16:15.313 { 00:16:15.313 "name": "BaseBdev4", 00:16:15.313 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:15.313 "is_configured": true, 00:16:15.313 "data_offset": 0, 00:16:15.313 "data_size": 65536 00:16:15.313 } 00:16:15.313 ] 00:16:15.314 }' 00:16:15.314 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.314 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.314 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.314 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.314 09:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.281 09:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.281 09:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.281 09:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.281 09:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.281 09:53:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.281 09:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.281 09:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.281 09:53:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.281 09:53:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.281 09:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.281 09:53:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.281 09:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.281 "name": "raid_bdev1", 00:16:16.281 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:16.281 "strip_size_kb": 64, 00:16:16.281 "state": "online", 00:16:16.281 "raid_level": "raid5f", 00:16:16.281 "superblock": false, 00:16:16.281 "num_base_bdevs": 4, 00:16:16.281 "num_base_bdevs_discovered": 4, 00:16:16.281 "num_base_bdevs_operational": 4, 00:16:16.281 "process": { 00:16:16.281 "type": "rebuild", 00:16:16.281 "target": "spare", 00:16:16.281 "progress": { 00:16:16.281 "blocks": 130560, 00:16:16.281 "percent": 66 00:16:16.281 } 00:16:16.281 }, 00:16:16.281 "base_bdevs_list": [ 00:16:16.281 { 00:16:16.281 "name": "spare", 00:16:16.281 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:16.281 "is_configured": true, 00:16:16.281 "data_offset": 0, 00:16:16.282 "data_size": 65536 00:16:16.282 }, 00:16:16.282 { 00:16:16.282 "name": "BaseBdev2", 00:16:16.282 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:16.282 "is_configured": true, 00:16:16.282 "data_offset": 0, 00:16:16.282 "data_size": 65536 00:16:16.282 }, 00:16:16.282 { 00:16:16.282 "name": "BaseBdev3", 00:16:16.282 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:16.282 
"is_configured": true, 00:16:16.282 "data_offset": 0, 00:16:16.282 "data_size": 65536 00:16:16.282 }, 00:16:16.282 { 00:16:16.282 "name": "BaseBdev4", 00:16:16.282 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:16.282 "is_configured": true, 00:16:16.282 "data_offset": 0, 00:16:16.282 "data_size": 65536 00:16:16.282 } 00:16:16.282 ] 00:16:16.282 }' 00:16:16.282 09:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.541 09:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.541 09:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.541 09:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.541 09:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.491 09:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.492 09:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.492 09:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.492 09:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.492 09:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.492 09:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.492 09:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.492 09:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.492 09:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.492 09:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:17.492 09:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.492 09:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.492 "name": "raid_bdev1", 00:16:17.492 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:17.492 "strip_size_kb": 64, 00:16:17.492 "state": "online", 00:16:17.492 "raid_level": "raid5f", 00:16:17.492 "superblock": false, 00:16:17.492 "num_base_bdevs": 4, 00:16:17.492 "num_base_bdevs_discovered": 4, 00:16:17.492 "num_base_bdevs_operational": 4, 00:16:17.492 "process": { 00:16:17.492 "type": "rebuild", 00:16:17.492 "target": "spare", 00:16:17.492 "progress": { 00:16:17.492 "blocks": 151680, 00:16:17.492 "percent": 77 00:16:17.492 } 00:16:17.492 }, 00:16:17.492 "base_bdevs_list": [ 00:16:17.492 { 00:16:17.492 "name": "spare", 00:16:17.492 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:17.492 "is_configured": true, 00:16:17.492 "data_offset": 0, 00:16:17.492 "data_size": 65536 00:16:17.492 }, 00:16:17.492 { 00:16:17.492 "name": "BaseBdev2", 00:16:17.492 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:17.492 "is_configured": true, 00:16:17.492 "data_offset": 0, 00:16:17.492 "data_size": 65536 00:16:17.492 }, 00:16:17.492 { 00:16:17.492 "name": "BaseBdev3", 00:16:17.492 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:17.492 "is_configured": true, 00:16:17.492 "data_offset": 0, 00:16:17.492 "data_size": 65536 00:16:17.492 }, 00:16:17.492 { 00:16:17.492 "name": "BaseBdev4", 00:16:17.492 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:17.492 "is_configured": true, 00:16:17.492 "data_offset": 0, 00:16:17.492 "data_size": 65536 00:16:17.492 } 00:16:17.492 ] 00:16:17.492 }' 00:16:17.492 09:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.492 09:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.492 09:53:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.751 09:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.751 09:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.689 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.689 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.689 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.689 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.689 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.689 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.689 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.689 09:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.689 09:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.689 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.689 09:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.689 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.689 "name": "raid_bdev1", 00:16:18.689 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:18.689 "strip_size_kb": 64, 00:16:18.689 "state": "online", 00:16:18.689 "raid_level": "raid5f", 00:16:18.689 "superblock": false, 00:16:18.689 "num_base_bdevs": 4, 00:16:18.689 "num_base_bdevs_discovered": 4, 00:16:18.689 "num_base_bdevs_operational": 4, 00:16:18.689 "process": { 00:16:18.689 
"type": "rebuild", 00:16:18.689 "target": "spare", 00:16:18.689 "progress": { 00:16:18.689 "blocks": 174720, 00:16:18.689 "percent": 88 00:16:18.689 } 00:16:18.689 }, 00:16:18.689 "base_bdevs_list": [ 00:16:18.689 { 00:16:18.689 "name": "spare", 00:16:18.689 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:18.689 "is_configured": true, 00:16:18.689 "data_offset": 0, 00:16:18.689 "data_size": 65536 00:16:18.689 }, 00:16:18.689 { 00:16:18.689 "name": "BaseBdev2", 00:16:18.689 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:18.689 "is_configured": true, 00:16:18.689 "data_offset": 0, 00:16:18.689 "data_size": 65536 00:16:18.689 }, 00:16:18.689 { 00:16:18.689 "name": "BaseBdev3", 00:16:18.689 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:18.689 "is_configured": true, 00:16:18.689 "data_offset": 0, 00:16:18.689 "data_size": 65536 00:16:18.689 }, 00:16:18.689 { 00:16:18.689 "name": "BaseBdev4", 00:16:18.689 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:18.689 "is_configured": true, 00:16:18.689 "data_offset": 0, 00:16:18.689 "data_size": 65536 00:16:18.690 } 00:16:18.690 ] 00:16:18.690 }' 00:16:18.690 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.690 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.690 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.690 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.690 09:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.069 09:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.069 09:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.069 09:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:20.069 09:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.069 09:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.069 09:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.069 09:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.069 09:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.069 09:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.069 09:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.069 09:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.069 [2024-12-06 09:53:45.012216] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:20.069 [2024-12-06 09:53:45.012289] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:20.069 [2024-12-06 09:53:45.012341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.069 09:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.069 "name": "raid_bdev1", 00:16:20.069 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:20.069 "strip_size_kb": 64, 00:16:20.069 "state": "online", 00:16:20.069 "raid_level": "raid5f", 00:16:20.069 "superblock": false, 00:16:20.069 "num_base_bdevs": 4, 00:16:20.069 "num_base_bdevs_discovered": 4, 00:16:20.069 "num_base_bdevs_operational": 4, 00:16:20.069 "process": { 00:16:20.069 "type": "rebuild", 00:16:20.069 "target": "spare", 00:16:20.069 "progress": { 00:16:20.069 "blocks": 195840, 00:16:20.069 "percent": 99 00:16:20.069 } 00:16:20.069 }, 00:16:20.069 "base_bdevs_list": [ 00:16:20.069 { 00:16:20.069 "name": 
"spare", 00:16:20.069 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:20.069 "is_configured": true, 00:16:20.069 "data_offset": 0, 00:16:20.069 "data_size": 65536 00:16:20.069 }, 00:16:20.069 { 00:16:20.069 "name": "BaseBdev2", 00:16:20.069 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:20.069 "is_configured": true, 00:16:20.069 "data_offset": 0, 00:16:20.069 "data_size": 65536 00:16:20.069 }, 00:16:20.069 { 00:16:20.069 "name": "BaseBdev3", 00:16:20.069 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:20.069 "is_configured": true, 00:16:20.069 "data_offset": 0, 00:16:20.069 "data_size": 65536 00:16:20.069 }, 00:16:20.069 { 00:16:20.069 "name": "BaseBdev4", 00:16:20.069 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:20.069 "is_configured": true, 00:16:20.069 "data_offset": 0, 00:16:20.069 "data_size": 65536 00:16:20.069 } 00:16:20.069 ] 00:16:20.069 }' 00:16:20.069 09:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.069 09:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.069 09:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.069 09:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.069 09:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.003 "name": "raid_bdev1", 00:16:21.003 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:21.003 "strip_size_kb": 64, 00:16:21.003 "state": "online", 00:16:21.003 "raid_level": "raid5f", 00:16:21.003 "superblock": false, 00:16:21.003 "num_base_bdevs": 4, 00:16:21.003 "num_base_bdevs_discovered": 4, 00:16:21.003 "num_base_bdevs_operational": 4, 00:16:21.003 "base_bdevs_list": [ 00:16:21.003 { 00:16:21.003 "name": "spare", 00:16:21.003 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:21.003 "is_configured": true, 00:16:21.003 "data_offset": 0, 00:16:21.003 "data_size": 65536 00:16:21.003 }, 00:16:21.003 { 00:16:21.003 "name": "BaseBdev2", 00:16:21.003 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:21.003 "is_configured": true, 00:16:21.003 "data_offset": 0, 00:16:21.003 "data_size": 65536 00:16:21.003 }, 00:16:21.003 { 00:16:21.003 "name": "BaseBdev3", 00:16:21.003 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:21.003 "is_configured": true, 00:16:21.003 "data_offset": 0, 00:16:21.003 "data_size": 65536 00:16:21.003 }, 00:16:21.003 { 00:16:21.003 "name": "BaseBdev4", 00:16:21.003 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:21.003 "is_configured": true, 00:16:21.003 "data_offset": 0, 00:16:21.003 
"data_size": 65536 00:16:21.003 } 00:16:21.003 ] 00:16:21.003 }' 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.003 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.260 "name": "raid_bdev1", 00:16:21.260 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:21.260 "strip_size_kb": 64, 00:16:21.260 "state": "online", 00:16:21.260 "raid_level": "raid5f", 
00:16:21.260 "superblock": false, 00:16:21.260 "num_base_bdevs": 4, 00:16:21.260 "num_base_bdevs_discovered": 4, 00:16:21.260 "num_base_bdevs_operational": 4, 00:16:21.260 "base_bdevs_list": [ 00:16:21.260 { 00:16:21.260 "name": "spare", 00:16:21.260 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:21.260 "is_configured": true, 00:16:21.260 "data_offset": 0, 00:16:21.260 "data_size": 65536 00:16:21.260 }, 00:16:21.260 { 00:16:21.260 "name": "BaseBdev2", 00:16:21.260 "uuid": "724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:21.260 "is_configured": true, 00:16:21.260 "data_offset": 0, 00:16:21.260 "data_size": 65536 00:16:21.260 }, 00:16:21.260 { 00:16:21.260 "name": "BaseBdev3", 00:16:21.260 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:21.260 "is_configured": true, 00:16:21.260 "data_offset": 0, 00:16:21.260 "data_size": 65536 00:16:21.260 }, 00:16:21.260 { 00:16:21.260 "name": "BaseBdev4", 00:16:21.260 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:21.260 "is_configured": true, 00:16:21.260 "data_offset": 0, 00:16:21.260 "data_size": 65536 00:16:21.260 } 00:16:21.260 ] 00:16:21.260 }' 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.260 "name": "raid_bdev1", 00:16:21.260 "uuid": "38b30e58-9b2e-4ddd-89a4-9e32878df25c", 00:16:21.260 "strip_size_kb": 64, 00:16:21.260 "state": "online", 00:16:21.260 "raid_level": "raid5f", 00:16:21.260 "superblock": false, 00:16:21.260 "num_base_bdevs": 4, 00:16:21.260 "num_base_bdevs_discovered": 4, 00:16:21.260 "num_base_bdevs_operational": 4, 00:16:21.260 "base_bdevs_list": [ 00:16:21.260 { 00:16:21.260 "name": "spare", 00:16:21.260 "uuid": "863e3c31-2cbe-504a-ab2f-b34c0f13e0d9", 00:16:21.260 "is_configured": true, 00:16:21.260 "data_offset": 0, 00:16:21.260 "data_size": 65536 00:16:21.260 }, 00:16:21.260 { 00:16:21.260 "name": "BaseBdev2", 00:16:21.260 "uuid": 
"724529d8-ce21-5aa2-a000-58dd564c2bb0", 00:16:21.260 "is_configured": true, 00:16:21.260 "data_offset": 0, 00:16:21.260 "data_size": 65536 00:16:21.260 }, 00:16:21.260 { 00:16:21.260 "name": "BaseBdev3", 00:16:21.260 "uuid": "e27c0ad5-cd53-59d8-98ec-27f382896410", 00:16:21.260 "is_configured": true, 00:16:21.260 "data_offset": 0, 00:16:21.260 "data_size": 65536 00:16:21.260 }, 00:16:21.260 { 00:16:21.260 "name": "BaseBdev4", 00:16:21.260 "uuid": "32a78bf4-a9d4-5d4e-a824-3e79e626ad22", 00:16:21.260 "is_configured": true, 00:16:21.260 "data_offset": 0, 00:16:21.260 "data_size": 65536 00:16:21.260 } 00:16:21.260 ] 00:16:21.260 }' 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.260 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.518 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:21.518 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.518 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.518 [2024-12-06 09:53:46.779203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.518 [2024-12-06 09:53:46.779240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.518 [2024-12-06 09:53:46.779345] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.518 [2024-12-06 09:53:46.779442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.518 [2024-12-06 09:53:46.779452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:21.518 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.518 09:53:46 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.518 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.518 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.518 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:21.777 09:53:46 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:21.777 /dev/nbd0 00:16:21.777 09:53:47 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:21.777 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:21.777 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:21.777 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:21.777 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:21.777 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:21.777 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:22.036 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:22.036 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:22.036 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:22.036 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.036 1+0 records in 00:16:22.036 1+0 records out 00:16:22.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338622 s, 12.1 MB/s 00:16:22.036 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.036 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:22.036 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.036 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:22.036 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:22.036 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:16:22.036 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:22.036 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:22.036 /dev/nbd1 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.296 1+0 records in 00:16:22.296 1+0 records out 00:16:22.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407941 s, 10.0 MB/s 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.296 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:22.556 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:22.556 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:22.556 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:22.556 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.556 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.556 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:16:22.556 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:22.556 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.556 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.556 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84457 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84457 ']' 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84457 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 84457 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.818 killing process with pid 84457 00:16:22.818 Received shutdown signal, test time was about 60.000000 seconds 00:16:22.818 00:16:22.818 Latency(us) 00:16:22.818 [2024-12-06T09:53:48.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.818 [2024-12-06T09:53:48.091Z] =================================================================================================================== 00:16:22.818 [2024-12-06T09:53:48.091Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84457' 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84457 00:16:22.818 [2024-12-06 09:53:47.990486] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:22.818 09:53:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84457 00:16:23.388 [2024-12-06 09:53:48.507805] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:24.821 00:16:24.821 real 0m20.089s 00:16:24.821 user 0m23.895s 00:16:24.821 sys 0m2.189s 00:16:24.821 ************************************ 00:16:24.821 END TEST raid5f_rebuild_test 00:16:24.821 ************************************ 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.821 09:53:49 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:24.821 09:53:49 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:24.821 09:53:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.821 09:53:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:24.821 ************************************ 00:16:24.821 START TEST raid5f_rebuild_test_sb 00:16:24.821 ************************************ 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:24.821 09:53:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84983 
00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84983 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84983 ']' 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.821 09:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.821 [2024-12-06 09:53:49.870838] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:16:24.821 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:24.821 Zero copy mechanism will not be used. 
00:16:24.821 [2024-12-06 09:53:49.871054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84983 ] 00:16:24.821 [2024-12-06 09:53:50.046411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.080 [2024-12-06 09:53:50.181512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.337 [2024-12-06 09:53:50.413862] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.337 [2024-12-06 09:53:50.413963] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.597 BaseBdev1_malloc 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.597 [2024-12-06 09:53:50.745134] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:25.597 [2024-12-06 09:53:50.745210] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.597 [2024-12-06 09:53:50.745235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:25.597 [2024-12-06 09:53:50.745248] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.597 [2024-12-06 09:53:50.747592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.597 [2024-12-06 09:53:50.747633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:25.597 BaseBdev1 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.597 BaseBdev2_malloc 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.597 [2024-12-06 09:53:50.804605] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:25.597 [2024-12-06 09:53:50.804665] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:25.597 [2024-12-06 09:53:50.804691] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:25.597 [2024-12-06 09:53:50.804704] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.597 [2024-12-06 09:53:50.806981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.597 [2024-12-06 09:53:50.807019] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:25.597 BaseBdev2 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.597 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.857 BaseBdev3_malloc 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.857 [2024-12-06 09:53:50.896891] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:25.857 [2024-12-06 09:53:50.896947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.857 [2024-12-06 09:53:50.896972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:25.857 [2024-12-06 
09:53:50.896985] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.857 [2024-12-06 09:53:50.899295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.857 [2024-12-06 09:53:50.899380] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:25.857 BaseBdev3 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.857 BaseBdev4_malloc 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.857 [2024-12-06 09:53:50.957490] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:25.857 [2024-12-06 09:53:50.957588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.857 [2024-12-06 09:53:50.957617] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:25.857 [2024-12-06 09:53:50.957628] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.857 [2024-12-06 09:53:50.959852] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:25.857 [2024-12-06 09:53:50.959901] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:25.857 BaseBdev4 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.857 09:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.857 spare_malloc 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.857 spare_delay 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.857 [2024-12-06 09:53:51.029645] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:25.857 [2024-12-06 09:53:51.029694] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.857 [2024-12-06 09:53:51.029711] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:25.857 [2024-12-06 09:53:51.029723] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.857 [2024-12-06 09:53:51.031976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.857 [2024-12-06 09:53:51.032055] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:25.857 spare 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.857 [2024-12-06 09:53:51.041687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.857 [2024-12-06 09:53:51.043690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.857 [2024-12-06 09:53:51.043754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.857 [2024-12-06 09:53:51.043802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:25.857 [2024-12-06 09:53:51.043997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:25.857 [2024-12-06 09:53:51.044011] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:25.857 [2024-12-06 09:53:51.044268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:25.857 [2024-12-06 09:53:51.051283] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:25.857 [2024-12-06 09:53:51.051339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:25.857 [2024-12-06 09:53:51.051554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.857 09:53:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.857 "name": "raid_bdev1", 00:16:25.857 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:25.857 "strip_size_kb": 64, 00:16:25.857 "state": "online", 00:16:25.857 "raid_level": "raid5f", 00:16:25.857 "superblock": true, 00:16:25.857 "num_base_bdevs": 4, 00:16:25.857 "num_base_bdevs_discovered": 4, 00:16:25.857 "num_base_bdevs_operational": 4, 00:16:25.857 "base_bdevs_list": [ 00:16:25.857 { 00:16:25.857 "name": "BaseBdev1", 00:16:25.857 "uuid": "b0828773-68dc-5de9-b68b-df68f528134d", 00:16:25.857 "is_configured": true, 00:16:25.857 "data_offset": 2048, 00:16:25.857 "data_size": 63488 00:16:25.857 }, 00:16:25.857 { 00:16:25.857 "name": "BaseBdev2", 00:16:25.857 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:25.857 "is_configured": true, 00:16:25.857 "data_offset": 2048, 00:16:25.857 "data_size": 63488 00:16:25.857 }, 00:16:25.857 { 00:16:25.857 "name": "BaseBdev3", 00:16:25.857 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:25.857 "is_configured": true, 00:16:25.857 "data_offset": 2048, 00:16:25.857 "data_size": 63488 00:16:25.857 }, 00:16:25.857 { 00:16:25.857 "name": "BaseBdev4", 00:16:25.857 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:25.857 "is_configured": true, 00:16:25.857 "data_offset": 2048, 00:16:25.857 "data_size": 63488 00:16:25.857 } 00:16:25.857 ] 00:16:25.857 }' 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.857 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.425 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:26.425 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:26.425 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.425 09:53:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.425 [2024-12-06 09:53:51.484059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.425 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.425 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:26.425 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:26.426 09:53:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:26.426 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:26.686 [2024-12-06 09:53:51.739435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:26.686 /dev/nbd0 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.686 1+0 records in 00:16:26.686 
1+0 records out 00:16:26.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310718 s, 13.2 MB/s 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:26.686 09:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:27.255 496+0 records in 00:16:27.255 496+0 records out 00:16:27.255 97517568 bytes (98 MB, 93 MiB) copied, 0.493263 s, 198 MB/s 00:16:27.255 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:27.255 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:27.255 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:27.255 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:27.255 09:53:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:27.255 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:27.255 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:27.255 [2024-12-06 09:53:52.513947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.515 [2024-12-06 09:53:52.543779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:27.515 09:53:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.515 "name": "raid_bdev1", 00:16:27.515 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:27.515 "strip_size_kb": 64, 00:16:27.515 "state": "online", 00:16:27.515 "raid_level": "raid5f", 00:16:27.515 "superblock": true, 00:16:27.515 "num_base_bdevs": 4, 00:16:27.515 "num_base_bdevs_discovered": 3, 00:16:27.515 "num_base_bdevs_operational": 3, 00:16:27.515 
"base_bdevs_list": [ 00:16:27.515 { 00:16:27.515 "name": null, 00:16:27.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.515 "is_configured": false, 00:16:27.515 "data_offset": 0, 00:16:27.515 "data_size": 63488 00:16:27.515 }, 00:16:27.515 { 00:16:27.515 "name": "BaseBdev2", 00:16:27.515 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:27.515 "is_configured": true, 00:16:27.515 "data_offset": 2048, 00:16:27.515 "data_size": 63488 00:16:27.515 }, 00:16:27.515 { 00:16:27.515 "name": "BaseBdev3", 00:16:27.515 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:27.515 "is_configured": true, 00:16:27.515 "data_offset": 2048, 00:16:27.515 "data_size": 63488 00:16:27.515 }, 00:16:27.515 { 00:16:27.515 "name": "BaseBdev4", 00:16:27.515 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:27.515 "is_configured": true, 00:16:27.515 "data_offset": 2048, 00:16:27.515 "data_size": 63488 00:16:27.515 } 00:16:27.515 ] 00:16:27.515 }' 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.515 09:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.775 09:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:27.775 09:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.775 09:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.775 [2024-12-06 09:53:53.022932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.775 [2024-12-06 09:53:53.039153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:27.775 09:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.775 09:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:28.034 [2024-12-06 09:53:53.048891] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.008 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.008 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.008 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.008 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.008 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.008 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.008 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.008 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.008 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.008 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.008 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.008 "name": "raid_bdev1", 00:16:29.008 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:29.008 "strip_size_kb": 64, 00:16:29.008 "state": "online", 00:16:29.008 "raid_level": "raid5f", 00:16:29.008 "superblock": true, 00:16:29.008 "num_base_bdevs": 4, 00:16:29.008 "num_base_bdevs_discovered": 4, 00:16:29.008 "num_base_bdevs_operational": 4, 00:16:29.008 "process": { 00:16:29.008 "type": "rebuild", 00:16:29.008 "target": "spare", 00:16:29.008 "progress": { 00:16:29.008 "blocks": 19200, 00:16:29.008 "percent": 10 00:16:29.008 } 00:16:29.008 }, 00:16:29.008 "base_bdevs_list": [ 00:16:29.008 { 00:16:29.008 "name": "spare", 00:16:29.008 "uuid": 
"64b29b99-3332-5644-b285-2d9d6172de18", 00:16:29.008 "is_configured": true, 00:16:29.008 "data_offset": 2048, 00:16:29.008 "data_size": 63488 00:16:29.008 }, 00:16:29.008 { 00:16:29.008 "name": "BaseBdev2", 00:16:29.008 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:29.008 "is_configured": true, 00:16:29.008 "data_offset": 2048, 00:16:29.008 "data_size": 63488 00:16:29.008 }, 00:16:29.008 { 00:16:29.008 "name": "BaseBdev3", 00:16:29.008 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:29.008 "is_configured": true, 00:16:29.008 "data_offset": 2048, 00:16:29.008 "data_size": 63488 00:16:29.008 }, 00:16:29.008 { 00:16:29.008 "name": "BaseBdev4", 00:16:29.008 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:29.008 "is_configured": true, 00:16:29.008 "data_offset": 2048, 00:16:29.008 "data_size": 63488 00:16:29.008 } 00:16:29.008 ] 00:16:29.008 }' 00:16:29.008 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.009 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.009 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.009 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.009 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:29.009 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.009 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.009 [2024-12-06 09:53:54.184360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.009 [2024-12-06 09:53:54.256389] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:29.009 [2024-12-06 09:53:54.256464] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.009 [2024-12-06 09:53:54.256482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.009 [2024-12-06 09:53:54.256492] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.269 "name": "raid_bdev1", 00:16:29.269 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:29.269 "strip_size_kb": 64, 00:16:29.269 "state": "online", 00:16:29.269 "raid_level": "raid5f", 00:16:29.269 "superblock": true, 00:16:29.269 "num_base_bdevs": 4, 00:16:29.269 "num_base_bdevs_discovered": 3, 00:16:29.269 "num_base_bdevs_operational": 3, 00:16:29.269 "base_bdevs_list": [ 00:16:29.269 { 00:16:29.269 "name": null, 00:16:29.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.269 "is_configured": false, 00:16:29.269 "data_offset": 0, 00:16:29.269 "data_size": 63488 00:16:29.269 }, 00:16:29.269 { 00:16:29.269 "name": "BaseBdev2", 00:16:29.269 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:29.269 "is_configured": true, 00:16:29.269 "data_offset": 2048, 00:16:29.269 "data_size": 63488 00:16:29.269 }, 00:16:29.269 { 00:16:29.269 "name": "BaseBdev3", 00:16:29.269 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:29.269 "is_configured": true, 00:16:29.269 "data_offset": 2048, 00:16:29.269 "data_size": 63488 00:16:29.269 }, 00:16:29.269 { 00:16:29.269 "name": "BaseBdev4", 00:16:29.269 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:29.269 "is_configured": true, 00:16:29.269 "data_offset": 2048, 00:16:29.269 "data_size": 63488 00:16:29.269 } 00:16:29.269 ] 00:16:29.269 }' 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.269 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.529 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.529 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.529 
09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.529 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.529 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.529 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.529 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.529 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.529 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.529 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.529 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.529 "name": "raid_bdev1", 00:16:29.529 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:29.529 "strip_size_kb": 64, 00:16:29.529 "state": "online", 00:16:29.529 "raid_level": "raid5f", 00:16:29.529 "superblock": true, 00:16:29.529 "num_base_bdevs": 4, 00:16:29.529 "num_base_bdevs_discovered": 3, 00:16:29.529 "num_base_bdevs_operational": 3, 00:16:29.529 "base_bdevs_list": [ 00:16:29.529 { 00:16:29.529 "name": null, 00:16:29.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.529 "is_configured": false, 00:16:29.529 "data_offset": 0, 00:16:29.529 "data_size": 63488 00:16:29.529 }, 00:16:29.529 { 00:16:29.529 "name": "BaseBdev2", 00:16:29.529 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:29.529 "is_configured": true, 00:16:29.529 "data_offset": 2048, 00:16:29.529 "data_size": 63488 00:16:29.529 }, 00:16:29.529 { 00:16:29.529 "name": "BaseBdev3", 00:16:29.529 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:29.529 "is_configured": true, 00:16:29.529 "data_offset": 2048, 00:16:29.529 
"data_size": 63488 00:16:29.529 }, 00:16:29.529 { 00:16:29.529 "name": "BaseBdev4", 00:16:29.529 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:29.529 "is_configured": true, 00:16:29.529 "data_offset": 2048, 00:16:29.529 "data_size": 63488 00:16:29.529 } 00:16:29.529 ] 00:16:29.529 }' 00:16:29.529 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.789 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.789 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.789 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.789 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:29.789 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.789 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.789 [2024-12-06 09:53:54.889080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.789 [2024-12-06 09:53:54.902843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:29.789 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.789 09:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:29.789 [2024-12-06 09:53:54.911818] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.732 "name": "raid_bdev1", 00:16:30.732 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:30.732 "strip_size_kb": 64, 00:16:30.732 "state": "online", 00:16:30.732 "raid_level": "raid5f", 00:16:30.732 "superblock": true, 00:16:30.732 "num_base_bdevs": 4, 00:16:30.732 "num_base_bdevs_discovered": 4, 00:16:30.732 "num_base_bdevs_operational": 4, 00:16:30.732 "process": { 00:16:30.732 "type": "rebuild", 00:16:30.732 "target": "spare", 00:16:30.732 "progress": { 00:16:30.732 "blocks": 19200, 00:16:30.732 "percent": 10 00:16:30.732 } 00:16:30.732 }, 00:16:30.732 "base_bdevs_list": [ 00:16:30.732 { 00:16:30.732 "name": "spare", 00:16:30.732 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:30.732 "is_configured": true, 00:16:30.732 "data_offset": 2048, 00:16:30.732 "data_size": 63488 00:16:30.732 }, 00:16:30.732 { 00:16:30.732 "name": "BaseBdev2", 00:16:30.732 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:30.732 "is_configured": true, 00:16:30.732 "data_offset": 2048, 00:16:30.732 "data_size": 63488 00:16:30.732 }, 00:16:30.732 { 
00:16:30.732 "name": "BaseBdev3", 00:16:30.732 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:30.732 "is_configured": true, 00:16:30.732 "data_offset": 2048, 00:16:30.732 "data_size": 63488 00:16:30.732 }, 00:16:30.732 { 00:16:30.732 "name": "BaseBdev4", 00:16:30.732 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:30.732 "is_configured": true, 00:16:30.732 "data_offset": 2048, 00:16:30.732 "data_size": 63488 00:16:30.732 } 00:16:30.732 ] 00:16:30.732 }' 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.732 09:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:30.992 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=630 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.992 "name": "raid_bdev1", 00:16:30.992 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:30.992 "strip_size_kb": 64, 00:16:30.992 "state": "online", 00:16:30.992 "raid_level": "raid5f", 00:16:30.992 "superblock": true, 00:16:30.992 "num_base_bdevs": 4, 00:16:30.992 "num_base_bdevs_discovered": 4, 00:16:30.992 "num_base_bdevs_operational": 4, 00:16:30.992 "process": { 00:16:30.992 "type": "rebuild", 00:16:30.992 "target": "spare", 00:16:30.992 "progress": { 00:16:30.992 "blocks": 21120, 00:16:30.992 "percent": 11 00:16:30.992 } 00:16:30.992 }, 00:16:30.992 "base_bdevs_list": [ 00:16:30.992 { 00:16:30.992 "name": "spare", 00:16:30.992 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:30.992 "is_configured": true, 00:16:30.992 "data_offset": 2048, 00:16:30.992 "data_size": 63488 00:16:30.992 }, 00:16:30.992 { 00:16:30.992 "name": "BaseBdev2", 00:16:30.992 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:30.992 "is_configured": true, 00:16:30.992 "data_offset": 2048, 00:16:30.992 "data_size": 63488 00:16:30.992 }, 00:16:30.992 { 
00:16:30.992 "name": "BaseBdev3", 00:16:30.992 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:30.992 "is_configured": true, 00:16:30.992 "data_offset": 2048, 00:16:30.992 "data_size": 63488 00:16:30.992 }, 00:16:30.992 { 00:16:30.992 "name": "BaseBdev4", 00:16:30.992 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:30.992 "is_configured": true, 00:16:30.992 "data_offset": 2048, 00:16:30.992 "data_size": 63488 00:16:30.992 } 00:16:30.992 ] 00:16:30.992 }' 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.992 09:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.932 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.932 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.932 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.932 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.932 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.932 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.191 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.191 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.191 09:53:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.191 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.191 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.191 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.191 "name": "raid_bdev1", 00:16:32.191 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:32.191 "strip_size_kb": 64, 00:16:32.191 "state": "online", 00:16:32.191 "raid_level": "raid5f", 00:16:32.191 "superblock": true, 00:16:32.191 "num_base_bdevs": 4, 00:16:32.191 "num_base_bdevs_discovered": 4, 00:16:32.191 "num_base_bdevs_operational": 4, 00:16:32.191 "process": { 00:16:32.191 "type": "rebuild", 00:16:32.191 "target": "spare", 00:16:32.191 "progress": { 00:16:32.191 "blocks": 42240, 00:16:32.191 "percent": 22 00:16:32.191 } 00:16:32.191 }, 00:16:32.191 "base_bdevs_list": [ 00:16:32.191 { 00:16:32.191 "name": "spare", 00:16:32.191 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:32.191 "is_configured": true, 00:16:32.191 "data_offset": 2048, 00:16:32.191 "data_size": 63488 00:16:32.191 }, 00:16:32.191 { 00:16:32.191 "name": "BaseBdev2", 00:16:32.191 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:32.191 "is_configured": true, 00:16:32.191 "data_offset": 2048, 00:16:32.191 "data_size": 63488 00:16:32.191 }, 00:16:32.191 { 00:16:32.191 "name": "BaseBdev3", 00:16:32.191 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:32.191 "is_configured": true, 00:16:32.191 "data_offset": 2048, 00:16:32.191 "data_size": 63488 00:16:32.191 }, 00:16:32.191 { 00:16:32.191 "name": "BaseBdev4", 00:16:32.191 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:32.191 "is_configured": true, 00:16:32.191 "data_offset": 2048, 00:16:32.191 "data_size": 63488 00:16:32.191 } 00:16:32.191 ] 00:16:32.191 }' 00:16:32.191 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.191 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.191 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.191 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.191 09:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.130 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.130 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.130 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.130 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.130 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.130 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.130 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.130 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.130 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.130 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.130 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.390 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.390 "name": "raid_bdev1", 00:16:33.390 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:33.390 "strip_size_kb": 64, 00:16:33.390 "state": 
"online", 00:16:33.390 "raid_level": "raid5f", 00:16:33.390 "superblock": true, 00:16:33.390 "num_base_bdevs": 4, 00:16:33.390 "num_base_bdevs_discovered": 4, 00:16:33.390 "num_base_bdevs_operational": 4, 00:16:33.390 "process": { 00:16:33.390 "type": "rebuild", 00:16:33.390 "target": "spare", 00:16:33.390 "progress": { 00:16:33.390 "blocks": 65280, 00:16:33.390 "percent": 34 00:16:33.390 } 00:16:33.390 }, 00:16:33.390 "base_bdevs_list": [ 00:16:33.390 { 00:16:33.390 "name": "spare", 00:16:33.390 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:33.390 "is_configured": true, 00:16:33.390 "data_offset": 2048, 00:16:33.390 "data_size": 63488 00:16:33.390 }, 00:16:33.390 { 00:16:33.390 "name": "BaseBdev2", 00:16:33.390 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:33.390 "is_configured": true, 00:16:33.390 "data_offset": 2048, 00:16:33.390 "data_size": 63488 00:16:33.390 }, 00:16:33.390 { 00:16:33.390 "name": "BaseBdev3", 00:16:33.390 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:33.390 "is_configured": true, 00:16:33.390 "data_offset": 2048, 00:16:33.390 "data_size": 63488 00:16:33.390 }, 00:16:33.390 { 00:16:33.390 "name": "BaseBdev4", 00:16:33.390 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:33.390 "is_configured": true, 00:16:33.390 "data_offset": 2048, 00:16:33.390 "data_size": 63488 00:16:33.390 } 00:16:33.390 ] 00:16:33.390 }' 00:16:33.390 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.390 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.390 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.390 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.390 09:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.328 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.328 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.328 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.328 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.328 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.328 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.328 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.329 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.329 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.329 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.329 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.329 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.329 "name": "raid_bdev1", 00:16:34.329 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:34.329 "strip_size_kb": 64, 00:16:34.329 "state": "online", 00:16:34.329 "raid_level": "raid5f", 00:16:34.329 "superblock": true, 00:16:34.329 "num_base_bdevs": 4, 00:16:34.329 "num_base_bdevs_discovered": 4, 00:16:34.329 "num_base_bdevs_operational": 4, 00:16:34.329 "process": { 00:16:34.329 "type": "rebuild", 00:16:34.329 "target": "spare", 00:16:34.329 "progress": { 00:16:34.329 "blocks": 86400, 00:16:34.329 "percent": 45 00:16:34.329 } 00:16:34.329 }, 00:16:34.329 "base_bdevs_list": [ 00:16:34.329 { 00:16:34.329 "name": "spare", 00:16:34.329 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 
00:16:34.329 "is_configured": true, 00:16:34.329 "data_offset": 2048, 00:16:34.329 "data_size": 63488 00:16:34.329 }, 00:16:34.329 { 00:16:34.329 "name": "BaseBdev2", 00:16:34.329 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:34.329 "is_configured": true, 00:16:34.329 "data_offset": 2048, 00:16:34.329 "data_size": 63488 00:16:34.329 }, 00:16:34.329 { 00:16:34.329 "name": "BaseBdev3", 00:16:34.329 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:34.329 "is_configured": true, 00:16:34.329 "data_offset": 2048, 00:16:34.329 "data_size": 63488 00:16:34.329 }, 00:16:34.329 { 00:16:34.329 "name": "BaseBdev4", 00:16:34.329 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:34.329 "is_configured": true, 00:16:34.329 "data_offset": 2048, 00:16:34.329 "data_size": 63488 00:16:34.329 } 00:16:34.329 ] 00:16:34.329 }' 00:16:34.329 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.329 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.329 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.588 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.588 09:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.528 09:54:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.528 "name": "raid_bdev1", 00:16:35.528 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:35.528 "strip_size_kb": 64, 00:16:35.528 "state": "online", 00:16:35.528 "raid_level": "raid5f", 00:16:35.528 "superblock": true, 00:16:35.528 "num_base_bdevs": 4, 00:16:35.528 "num_base_bdevs_discovered": 4, 00:16:35.528 "num_base_bdevs_operational": 4, 00:16:35.528 "process": { 00:16:35.528 "type": "rebuild", 00:16:35.528 "target": "spare", 00:16:35.528 "progress": { 00:16:35.528 "blocks": 109440, 00:16:35.528 "percent": 57 00:16:35.528 } 00:16:35.528 }, 00:16:35.528 "base_bdevs_list": [ 00:16:35.528 { 00:16:35.528 "name": "spare", 00:16:35.528 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:35.528 "is_configured": true, 00:16:35.528 "data_offset": 2048, 00:16:35.528 "data_size": 63488 00:16:35.528 }, 00:16:35.528 { 00:16:35.528 "name": "BaseBdev2", 00:16:35.528 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:35.528 "is_configured": true, 00:16:35.528 "data_offset": 2048, 00:16:35.528 "data_size": 63488 00:16:35.528 }, 00:16:35.528 { 00:16:35.528 "name": "BaseBdev3", 00:16:35.528 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:35.528 "is_configured": true, 00:16:35.528 "data_offset": 2048, 00:16:35.528 
"data_size": 63488 00:16:35.528 }, 00:16:35.528 { 00:16:35.528 "name": "BaseBdev4", 00:16:35.528 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:35.528 "is_configured": true, 00:16:35.528 "data_offset": 2048, 00:16:35.528 "data_size": 63488 00:16:35.528 } 00:16:35.528 ] 00:16:35.528 }' 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.528 09:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.909 
09:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.909 "name": "raid_bdev1", 00:16:36.909 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:36.909 "strip_size_kb": 64, 00:16:36.909 "state": "online", 00:16:36.909 "raid_level": "raid5f", 00:16:36.909 "superblock": true, 00:16:36.909 "num_base_bdevs": 4, 00:16:36.909 "num_base_bdevs_discovered": 4, 00:16:36.909 "num_base_bdevs_operational": 4, 00:16:36.909 "process": { 00:16:36.909 "type": "rebuild", 00:16:36.909 "target": "spare", 00:16:36.909 "progress": { 00:16:36.909 "blocks": 130560, 00:16:36.909 "percent": 68 00:16:36.909 } 00:16:36.909 }, 00:16:36.909 "base_bdevs_list": [ 00:16:36.909 { 00:16:36.909 "name": "spare", 00:16:36.909 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:36.909 "is_configured": true, 00:16:36.909 "data_offset": 2048, 00:16:36.909 "data_size": 63488 00:16:36.909 }, 00:16:36.909 { 00:16:36.909 "name": "BaseBdev2", 00:16:36.909 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:36.909 "is_configured": true, 00:16:36.909 "data_offset": 2048, 00:16:36.909 "data_size": 63488 00:16:36.909 }, 00:16:36.909 { 00:16:36.909 "name": "BaseBdev3", 00:16:36.909 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:36.909 "is_configured": true, 00:16:36.909 "data_offset": 2048, 00:16:36.909 "data_size": 63488 00:16:36.909 }, 00:16:36.909 { 00:16:36.909 "name": "BaseBdev4", 00:16:36.909 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:36.909 "is_configured": true, 00:16:36.909 "data_offset": 2048, 00:16:36.909 "data_size": 63488 00:16:36.909 } 00:16:36.909 ] 00:16:36.909 }' 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.909 09:54:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.909 09:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.847 09:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.847 09:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.847 09:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.847 09:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.847 09:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.847 09:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.847 09:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.847 09:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.847 09:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.847 09:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.847 09:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.847 09:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.847 "name": "raid_bdev1", 00:16:37.847 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:37.847 "strip_size_kb": 64, 00:16:37.847 "state": "online", 00:16:37.847 "raid_level": "raid5f", 00:16:37.847 "superblock": true, 00:16:37.847 "num_base_bdevs": 4, 00:16:37.847 "num_base_bdevs_discovered": 4, 00:16:37.847 "num_base_bdevs_operational": 
4, 00:16:37.847 "process": { 00:16:37.847 "type": "rebuild", 00:16:37.847 "target": "spare", 00:16:37.847 "progress": { 00:16:37.847 "blocks": 153600, 00:16:37.847 "percent": 80 00:16:37.847 } 00:16:37.847 }, 00:16:37.847 "base_bdevs_list": [ 00:16:37.847 { 00:16:37.847 "name": "spare", 00:16:37.847 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:37.847 "is_configured": true, 00:16:37.847 "data_offset": 2048, 00:16:37.847 "data_size": 63488 00:16:37.847 }, 00:16:37.847 { 00:16:37.847 "name": "BaseBdev2", 00:16:37.847 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:37.847 "is_configured": true, 00:16:37.847 "data_offset": 2048, 00:16:37.847 "data_size": 63488 00:16:37.847 }, 00:16:37.847 { 00:16:37.847 "name": "BaseBdev3", 00:16:37.847 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:37.847 "is_configured": true, 00:16:37.847 "data_offset": 2048, 00:16:37.847 "data_size": 63488 00:16:37.847 }, 00:16:37.847 { 00:16:37.847 "name": "BaseBdev4", 00:16:37.847 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:37.847 "is_configured": true, 00:16:37.847 "data_offset": 2048, 00:16:37.847 "data_size": 63488 00:16:37.847 } 00:16:37.847 ] 00:16:37.847 }' 00:16:37.847 09:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.847 09:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.847 09:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.847 09:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.847 09:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.228 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.228 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.228 
09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.228 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.228 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.228 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.228 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.228 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.228 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.228 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.228 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.228 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.228 "name": "raid_bdev1", 00:16:39.228 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:39.229 "strip_size_kb": 64, 00:16:39.229 "state": "online", 00:16:39.229 "raid_level": "raid5f", 00:16:39.229 "superblock": true, 00:16:39.229 "num_base_bdevs": 4, 00:16:39.229 "num_base_bdevs_discovered": 4, 00:16:39.229 "num_base_bdevs_operational": 4, 00:16:39.229 "process": { 00:16:39.229 "type": "rebuild", 00:16:39.229 "target": "spare", 00:16:39.229 "progress": { 00:16:39.229 "blocks": 174720, 00:16:39.229 "percent": 91 00:16:39.229 } 00:16:39.229 }, 00:16:39.229 "base_bdevs_list": [ 00:16:39.229 { 00:16:39.229 "name": "spare", 00:16:39.229 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:39.229 "is_configured": true, 00:16:39.229 "data_offset": 2048, 00:16:39.229 "data_size": 63488 00:16:39.229 }, 00:16:39.229 { 00:16:39.229 "name": "BaseBdev2", 00:16:39.229 "uuid": 
"868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:39.229 "is_configured": true, 00:16:39.229 "data_offset": 2048, 00:16:39.229 "data_size": 63488 00:16:39.229 }, 00:16:39.229 { 00:16:39.229 "name": "BaseBdev3", 00:16:39.229 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:39.229 "is_configured": true, 00:16:39.229 "data_offset": 2048, 00:16:39.229 "data_size": 63488 00:16:39.229 }, 00:16:39.229 { 00:16:39.229 "name": "BaseBdev4", 00:16:39.229 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:39.229 "is_configured": true, 00:16:39.229 "data_offset": 2048, 00:16:39.229 "data_size": 63488 00:16:39.229 } 00:16:39.229 ] 00:16:39.229 }' 00:16:39.229 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.229 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.229 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.229 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.229 09:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.799 [2024-12-06 09:54:04.966150] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:39.799 [2024-12-06 09:54:04.966226] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:39.799 [2024-12-06 09:54:04.966355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.057 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.057 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.057 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.057 09:54:05 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.057 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.057 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.057 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.057 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.057 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.057 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.057 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.057 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.057 "name": "raid_bdev1", 00:16:40.057 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:40.057 "strip_size_kb": 64, 00:16:40.057 "state": "online", 00:16:40.057 "raid_level": "raid5f", 00:16:40.057 "superblock": true, 00:16:40.057 "num_base_bdevs": 4, 00:16:40.057 "num_base_bdevs_discovered": 4, 00:16:40.057 "num_base_bdevs_operational": 4, 00:16:40.057 "base_bdevs_list": [ 00:16:40.057 { 00:16:40.057 "name": "spare", 00:16:40.057 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:40.057 "is_configured": true, 00:16:40.057 "data_offset": 2048, 00:16:40.057 "data_size": 63488 00:16:40.057 }, 00:16:40.057 { 00:16:40.057 "name": "BaseBdev2", 00:16:40.057 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:40.057 "is_configured": true, 00:16:40.057 "data_offset": 2048, 00:16:40.057 "data_size": 63488 00:16:40.057 }, 00:16:40.057 { 00:16:40.057 "name": "BaseBdev3", 00:16:40.057 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:40.057 "is_configured": true, 00:16:40.057 "data_offset": 2048, 00:16:40.057 "data_size": 63488 00:16:40.057 }, 
00:16:40.057 { 00:16:40.057 "name": "BaseBdev4", 00:16:40.057 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:40.057 "is_configured": true, 00:16:40.057 "data_offset": 2048, 00:16:40.057 "data_size": 63488 00:16:40.057 } 00:16:40.057 ] 00:16:40.057 }' 00:16:40.057 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.315 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:40.315 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.315 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:40.315 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:40.315 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.316 "name": "raid_bdev1", 00:16:40.316 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:40.316 "strip_size_kb": 64, 00:16:40.316 "state": "online", 00:16:40.316 "raid_level": "raid5f", 00:16:40.316 "superblock": true, 00:16:40.316 "num_base_bdevs": 4, 00:16:40.316 "num_base_bdevs_discovered": 4, 00:16:40.316 "num_base_bdevs_operational": 4, 00:16:40.316 "base_bdevs_list": [ 00:16:40.316 { 00:16:40.316 "name": "spare", 00:16:40.316 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:40.316 "is_configured": true, 00:16:40.316 "data_offset": 2048, 00:16:40.316 "data_size": 63488 00:16:40.316 }, 00:16:40.316 { 00:16:40.316 "name": "BaseBdev2", 00:16:40.316 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:40.316 "is_configured": true, 00:16:40.316 "data_offset": 2048, 00:16:40.316 "data_size": 63488 00:16:40.316 }, 00:16:40.316 { 00:16:40.316 "name": "BaseBdev3", 00:16:40.316 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:40.316 "is_configured": true, 00:16:40.316 "data_offset": 2048, 00:16:40.316 "data_size": 63488 00:16:40.316 }, 00:16:40.316 { 00:16:40.316 "name": "BaseBdev4", 00:16:40.316 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:40.316 "is_configured": true, 00:16:40.316 "data_offset": 2048, 00:16:40.316 "data_size": 63488 00:16:40.316 } 00:16:40.316 ] 00:16:40.316 }' 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:40.316 09:54:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.316 "name": "raid_bdev1", 00:16:40.316 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:40.316 "strip_size_kb": 64, 00:16:40.316 "state": "online", 00:16:40.316 "raid_level": "raid5f", 00:16:40.316 "superblock": true, 00:16:40.316 "num_base_bdevs": 4, 00:16:40.316 "num_base_bdevs_discovered": 4, 00:16:40.316 "num_base_bdevs_operational": 4, 00:16:40.316 
"base_bdevs_list": [ 00:16:40.316 { 00:16:40.316 "name": "spare", 00:16:40.316 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:40.316 "is_configured": true, 00:16:40.316 "data_offset": 2048, 00:16:40.316 "data_size": 63488 00:16:40.316 }, 00:16:40.316 { 00:16:40.316 "name": "BaseBdev2", 00:16:40.316 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:40.316 "is_configured": true, 00:16:40.316 "data_offset": 2048, 00:16:40.316 "data_size": 63488 00:16:40.316 }, 00:16:40.316 { 00:16:40.316 "name": "BaseBdev3", 00:16:40.316 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:40.316 "is_configured": true, 00:16:40.316 "data_offset": 2048, 00:16:40.316 "data_size": 63488 00:16:40.316 }, 00:16:40.316 { 00:16:40.316 "name": "BaseBdev4", 00:16:40.316 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:40.316 "is_configured": true, 00:16:40.316 "data_offset": 2048, 00:16:40.316 "data_size": 63488 00:16:40.316 } 00:16:40.316 ] 00:16:40.316 }' 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.316 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.883 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:40.883 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.883 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.883 [2024-12-06 09:54:05.989127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.883 [2024-12-06 09:54:05.989220] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.883 [2024-12-06 09:54:05.989339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.883 [2024-12-06 09:54:05.989477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:16:40.883 [2024-12-06 09:54:05.989540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:40.883 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.883 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.883 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:40.883 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.883 09:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:40.883 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:41.141 /dev/nbd0 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:41.141 1+0 records in 00:16:41.141 1+0 records out 00:16:41.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053293 s, 7.7 MB/s 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:41.141 09:54:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:41.141 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:41.402 /dev/nbd1 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:16:41.402 1+0 records in 00:16:41.402 1+0 records out 00:16:41.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518623 s, 7.9 MB/s 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:41.402 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:41.403 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:41.403 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.676 09:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.952 [2024-12-06 09:54:07.164922] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:41.952 [2024-12-06 09:54:07.164983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.952 [2024-12-06 09:54:07.165009] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:41.952 [2024-12-06 09:54:07.165019] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.952 [2024-12-06 09:54:07.167648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.952 [2024-12-06 09:54:07.167690] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:41.952 [2024-12-06 09:54:07.167780] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:41.952 [2024-12-06 09:54:07.167844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.952 [2024-12-06 09:54:07.168000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.952 [2024-12-06 09:54:07.168097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:41.952 [2024-12-06 09:54:07.168195] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:41.952 spare 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.952 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.211 [2024-12-06 09:54:07.268111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:42.211 [2024-12-06 09:54:07.268149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:42.211 [2024-12-06 09:54:07.268461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:42.211 [2024-12-06 09:54:07.275646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:42.211 [2024-12-06 09:54:07.275668] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:42.211 [2024-12-06 09:54:07.275846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.211 "name": "raid_bdev1", 00:16:42.211 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:42.211 "strip_size_kb": 64, 00:16:42.211 "state": "online", 00:16:42.211 "raid_level": "raid5f", 00:16:42.211 "superblock": true, 00:16:42.211 "num_base_bdevs": 4, 00:16:42.211 "num_base_bdevs_discovered": 4, 00:16:42.211 "num_base_bdevs_operational": 4, 00:16:42.211 "base_bdevs_list": [ 00:16:42.211 { 00:16:42.211 "name": "spare", 00:16:42.211 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:42.211 "is_configured": true, 00:16:42.211 "data_offset": 2048, 00:16:42.211 "data_size": 63488 00:16:42.211 }, 00:16:42.211 { 00:16:42.211 "name": "BaseBdev2", 00:16:42.211 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:42.211 "is_configured": true, 00:16:42.211 "data_offset": 
2048, 00:16:42.211 "data_size": 63488 00:16:42.211 }, 00:16:42.211 { 00:16:42.211 "name": "BaseBdev3", 00:16:42.211 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:42.211 "is_configured": true, 00:16:42.211 "data_offset": 2048, 00:16:42.211 "data_size": 63488 00:16:42.211 }, 00:16:42.211 { 00:16:42.211 "name": "BaseBdev4", 00:16:42.211 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:42.211 "is_configured": true, 00:16:42.211 "data_offset": 2048, 00:16:42.211 "data_size": 63488 00:16:42.211 } 00:16:42.211 ] 00:16:42.211 }' 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.211 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.469 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.469 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.469 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.469 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.469 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.469 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.469 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.469 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.469 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.728 "name": 
"raid_bdev1", 00:16:42.728 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:42.728 "strip_size_kb": 64, 00:16:42.728 "state": "online", 00:16:42.728 "raid_level": "raid5f", 00:16:42.728 "superblock": true, 00:16:42.728 "num_base_bdevs": 4, 00:16:42.728 "num_base_bdevs_discovered": 4, 00:16:42.728 "num_base_bdevs_operational": 4, 00:16:42.728 "base_bdevs_list": [ 00:16:42.728 { 00:16:42.728 "name": "spare", 00:16:42.728 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:42.728 "is_configured": true, 00:16:42.728 "data_offset": 2048, 00:16:42.728 "data_size": 63488 00:16:42.728 }, 00:16:42.728 { 00:16:42.728 "name": "BaseBdev2", 00:16:42.728 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:42.728 "is_configured": true, 00:16:42.728 "data_offset": 2048, 00:16:42.728 "data_size": 63488 00:16:42.728 }, 00:16:42.728 { 00:16:42.728 "name": "BaseBdev3", 00:16:42.728 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:42.728 "is_configured": true, 00:16:42.728 "data_offset": 2048, 00:16:42.728 "data_size": 63488 00:16:42.728 }, 00:16:42.728 { 00:16:42.728 "name": "BaseBdev4", 00:16:42.728 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:42.728 "is_configured": true, 00:16:42.728 "data_offset": 2048, 00:16:42.728 "data_size": 63488 00:16:42.728 } 00:16:42.728 ] 00:16:42.728 }' 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.728 [2024-12-06 09:54:07.924040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.728 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.729 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.729 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.729 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.729 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.729 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.729 "name": "raid_bdev1", 00:16:42.729 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:42.729 "strip_size_kb": 64, 00:16:42.729 "state": "online", 00:16:42.729 "raid_level": "raid5f", 00:16:42.729 "superblock": true, 00:16:42.729 "num_base_bdevs": 4, 00:16:42.729 "num_base_bdevs_discovered": 3, 00:16:42.729 "num_base_bdevs_operational": 3, 00:16:42.729 "base_bdevs_list": [ 00:16:42.729 { 00:16:42.729 "name": null, 00:16:42.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.729 "is_configured": false, 00:16:42.729 "data_offset": 0, 00:16:42.729 "data_size": 63488 00:16:42.729 }, 00:16:42.729 { 00:16:42.729 "name": "BaseBdev2", 00:16:42.729 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:42.729 "is_configured": true, 00:16:42.729 "data_offset": 2048, 00:16:42.729 "data_size": 63488 00:16:42.729 }, 00:16:42.729 { 00:16:42.729 "name": "BaseBdev3", 00:16:42.729 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:42.729 "is_configured": true, 00:16:42.729 "data_offset": 2048, 00:16:42.729 "data_size": 63488 00:16:42.729 }, 00:16:42.729 { 00:16:42.729 "name": "BaseBdev4", 00:16:42.729 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:42.729 "is_configured": true, 00:16:42.729 "data_offset": 
2048, 00:16:42.729 "data_size": 63488 00:16:42.729 } 00:16:42.729 ] 00:16:42.729 }' 00:16:42.729 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.729 09:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.298 09:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:43.298 09:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.298 09:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.298 [2024-12-06 09:54:08.391240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.298 [2024-12-06 09:54:08.391476] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:43.298 [2024-12-06 09:54:08.391551] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:43.298 [2024-12-06 09:54:08.391607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.298 [2024-12-06 09:54:08.405449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:43.298 09:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.298 09:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:43.298 [2024-12-06 09:54:08.414578] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:44.232 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.232 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.232 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.232 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.232 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.232 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.232 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.232 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.232 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.232 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.232 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.233 "name": "raid_bdev1", 00:16:44.233 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:44.233 "strip_size_kb": 64, 00:16:44.233 "state": "online", 00:16:44.233 
"raid_level": "raid5f", 00:16:44.233 "superblock": true, 00:16:44.233 "num_base_bdevs": 4, 00:16:44.233 "num_base_bdevs_discovered": 4, 00:16:44.233 "num_base_bdevs_operational": 4, 00:16:44.233 "process": { 00:16:44.233 "type": "rebuild", 00:16:44.233 "target": "spare", 00:16:44.233 "progress": { 00:16:44.233 "blocks": 19200, 00:16:44.233 "percent": 10 00:16:44.233 } 00:16:44.233 }, 00:16:44.233 "base_bdevs_list": [ 00:16:44.233 { 00:16:44.233 "name": "spare", 00:16:44.233 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:44.233 "is_configured": true, 00:16:44.233 "data_offset": 2048, 00:16:44.233 "data_size": 63488 00:16:44.233 }, 00:16:44.233 { 00:16:44.233 "name": "BaseBdev2", 00:16:44.233 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:44.233 "is_configured": true, 00:16:44.233 "data_offset": 2048, 00:16:44.233 "data_size": 63488 00:16:44.233 }, 00:16:44.233 { 00:16:44.233 "name": "BaseBdev3", 00:16:44.233 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:44.233 "is_configured": true, 00:16:44.233 "data_offset": 2048, 00:16:44.233 "data_size": 63488 00:16:44.233 }, 00:16:44.233 { 00:16:44.233 "name": "BaseBdev4", 00:16:44.233 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:44.233 "is_configured": true, 00:16:44.233 "data_offset": 2048, 00:16:44.233 "data_size": 63488 00:16:44.233 } 00:16:44.233 ] 00:16:44.233 }' 00:16:44.233 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.490 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.490 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.490 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.490 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:44.490 09:54:09 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.491 [2024-12-06 09:54:09.561245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.491 [2024-12-06 09:54:09.622301] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:44.491 [2024-12-06 09:54:09.622371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.491 [2024-12-06 09:54:09.622387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.491 [2024-12-06 09:54:09.622397] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.491 "name": "raid_bdev1", 00:16:44.491 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:44.491 "strip_size_kb": 64, 00:16:44.491 "state": "online", 00:16:44.491 "raid_level": "raid5f", 00:16:44.491 "superblock": true, 00:16:44.491 "num_base_bdevs": 4, 00:16:44.491 "num_base_bdevs_discovered": 3, 00:16:44.491 "num_base_bdevs_operational": 3, 00:16:44.491 "base_bdevs_list": [ 00:16:44.491 { 00:16:44.491 "name": null, 00:16:44.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.491 "is_configured": false, 00:16:44.491 "data_offset": 0, 00:16:44.491 "data_size": 63488 00:16:44.491 }, 00:16:44.491 { 00:16:44.491 "name": "BaseBdev2", 00:16:44.491 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:44.491 "is_configured": true, 00:16:44.491 "data_offset": 2048, 00:16:44.491 "data_size": 63488 00:16:44.491 }, 00:16:44.491 { 00:16:44.491 "name": "BaseBdev3", 00:16:44.491 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:44.491 "is_configured": true, 00:16:44.491 "data_offset": 2048, 00:16:44.491 "data_size": 63488 00:16:44.491 }, 00:16:44.491 { 00:16:44.491 "name": "BaseBdev4", 00:16:44.491 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:44.491 "is_configured": true, 00:16:44.491 "data_offset": 2048, 00:16:44.491 "data_size": 63488 00:16:44.491 } 00:16:44.491 ] 00:16:44.491 
}' 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.491 09:54:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.058 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:45.058 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.058 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.058 [2024-12-06 09:54:10.087943] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:45.058 [2024-12-06 09:54:10.088088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.058 [2024-12-06 09:54:10.088134] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:45.058 [2024-12-06 09:54:10.088193] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.058 [2024-12-06 09:54:10.088752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.058 [2024-12-06 09:54:10.088817] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:45.058 [2024-12-06 09:54:10.088939] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:45.058 [2024-12-06 09:54:10.088982] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:45.058 [2024-12-06 09:54:10.089022] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:45.058 [2024-12-06 09:54:10.089111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.058 [2024-12-06 09:54:10.102951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:45.058 spare 00:16:45.058 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.058 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:45.058 [2024-12-06 09:54:10.111863] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.993 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.993 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.993 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.993 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.993 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.993 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.993 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.993 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.994 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.994 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.994 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.994 "name": "raid_bdev1", 00:16:45.994 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:45.994 "strip_size_kb": 64, 00:16:45.994 "state": 
"online", 00:16:45.994 "raid_level": "raid5f", 00:16:45.994 "superblock": true, 00:16:45.994 "num_base_bdevs": 4, 00:16:45.994 "num_base_bdevs_discovered": 4, 00:16:45.994 "num_base_bdevs_operational": 4, 00:16:45.994 "process": { 00:16:45.994 "type": "rebuild", 00:16:45.994 "target": "spare", 00:16:45.994 "progress": { 00:16:45.994 "blocks": 19200, 00:16:45.994 "percent": 10 00:16:45.994 } 00:16:45.994 }, 00:16:45.994 "base_bdevs_list": [ 00:16:45.994 { 00:16:45.994 "name": "spare", 00:16:45.994 "uuid": "64b29b99-3332-5644-b285-2d9d6172de18", 00:16:45.994 "is_configured": true, 00:16:45.994 "data_offset": 2048, 00:16:45.994 "data_size": 63488 00:16:45.994 }, 00:16:45.994 { 00:16:45.994 "name": "BaseBdev2", 00:16:45.994 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:45.994 "is_configured": true, 00:16:45.994 "data_offset": 2048, 00:16:45.994 "data_size": 63488 00:16:45.994 }, 00:16:45.994 { 00:16:45.994 "name": "BaseBdev3", 00:16:45.994 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:45.994 "is_configured": true, 00:16:45.994 "data_offset": 2048, 00:16:45.994 "data_size": 63488 00:16:45.994 }, 00:16:45.994 { 00:16:45.994 "name": "BaseBdev4", 00:16:45.994 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:45.994 "is_configured": true, 00:16:45.994 "data_offset": 2048, 00:16:45.994 "data_size": 63488 00:16:45.994 } 00:16:45.994 ] 00:16:45.994 }' 00:16:45.994 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.994 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.994 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.994 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.994 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:45.994 09:54:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.994 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.994 [2024-12-06 09:54:11.251216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.253 [2024-12-06 09:54:11.319214] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:46.253 [2024-12-06 09:54:11.319325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.253 [2024-12-06 09:54:11.319348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:46.253 [2024-12-06 09:54:11.319356] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.253 09:54:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.253 "name": "raid_bdev1", 00:16:46.253 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:46.253 "strip_size_kb": 64, 00:16:46.253 "state": "online", 00:16:46.253 "raid_level": "raid5f", 00:16:46.253 "superblock": true, 00:16:46.253 "num_base_bdevs": 4, 00:16:46.253 "num_base_bdevs_discovered": 3, 00:16:46.253 "num_base_bdevs_operational": 3, 00:16:46.253 "base_bdevs_list": [ 00:16:46.253 { 00:16:46.253 "name": null, 00:16:46.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.253 "is_configured": false, 00:16:46.253 "data_offset": 0, 00:16:46.253 "data_size": 63488 00:16:46.253 }, 00:16:46.253 { 00:16:46.253 "name": "BaseBdev2", 00:16:46.253 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:46.253 "is_configured": true, 00:16:46.253 "data_offset": 2048, 00:16:46.253 "data_size": 63488 00:16:46.253 }, 00:16:46.253 { 00:16:46.253 "name": "BaseBdev3", 00:16:46.253 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:46.253 "is_configured": true, 00:16:46.253 "data_offset": 2048, 00:16:46.253 "data_size": 63488 00:16:46.253 }, 00:16:46.253 { 00:16:46.253 "name": "BaseBdev4", 00:16:46.253 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:46.253 "is_configured": true, 00:16:46.253 "data_offset": 2048, 00:16:46.253 
"data_size": 63488 00:16:46.253 } 00:16:46.253 ] 00:16:46.253 }' 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.253 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.820 "name": "raid_bdev1", 00:16:46.820 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:46.820 "strip_size_kb": 64, 00:16:46.820 "state": "online", 00:16:46.820 "raid_level": "raid5f", 00:16:46.820 "superblock": true, 00:16:46.820 "num_base_bdevs": 4, 00:16:46.820 "num_base_bdevs_discovered": 3, 00:16:46.820 "num_base_bdevs_operational": 3, 00:16:46.820 "base_bdevs_list": [ 00:16:46.820 { 00:16:46.820 "name": null, 00:16:46.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.820 
"is_configured": false, 00:16:46.820 "data_offset": 0, 00:16:46.820 "data_size": 63488 00:16:46.820 }, 00:16:46.820 { 00:16:46.820 "name": "BaseBdev2", 00:16:46.820 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:46.820 "is_configured": true, 00:16:46.820 "data_offset": 2048, 00:16:46.820 "data_size": 63488 00:16:46.820 }, 00:16:46.820 { 00:16:46.820 "name": "BaseBdev3", 00:16:46.820 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:46.820 "is_configured": true, 00:16:46.820 "data_offset": 2048, 00:16:46.820 "data_size": 63488 00:16:46.820 }, 00:16:46.820 { 00:16:46.820 "name": "BaseBdev4", 00:16:46.820 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:46.820 "is_configured": true, 00:16:46.820 "data_offset": 2048, 00:16:46.820 "data_size": 63488 00:16:46.820 } 00:16:46.820 ] 00:16:46.820 }' 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.820 09:54:11 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.820 [2024-12-06 09:54:11.926963] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:46.820 [2024-12-06 09:54:11.927018] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.820 [2024-12-06 09:54:11.927043] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:46.820 [2024-12-06 09:54:11.927052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.820 [2024-12-06 09:54:11.927556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.820 [2024-12-06 09:54:11.927588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:46.820 [2024-12-06 09:54:11.927673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:46.820 [2024-12-06 09:54:11.927687] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:46.820 [2024-12-06 09:54:11.927700] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:46.820 [2024-12-06 09:54:11.927710] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:46.820 BaseBdev1 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.820 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.756 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.756 "name": "raid_bdev1", 00:16:47.756 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:47.756 "strip_size_kb": 64, 00:16:47.756 "state": "online", 00:16:47.756 "raid_level": "raid5f", 00:16:47.756 "superblock": true, 00:16:47.756 "num_base_bdevs": 4, 00:16:47.756 "num_base_bdevs_discovered": 3, 00:16:47.756 "num_base_bdevs_operational": 3, 00:16:47.756 "base_bdevs_list": [ 00:16:47.756 { 00:16:47.756 "name": null, 00:16:47.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.756 "is_configured": false, 00:16:47.756 
"data_offset": 0, 00:16:47.757 "data_size": 63488 00:16:47.757 }, 00:16:47.757 { 00:16:47.757 "name": "BaseBdev2", 00:16:47.757 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:47.757 "is_configured": true, 00:16:47.757 "data_offset": 2048, 00:16:47.757 "data_size": 63488 00:16:47.757 }, 00:16:47.757 { 00:16:47.757 "name": "BaseBdev3", 00:16:47.757 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:47.757 "is_configured": true, 00:16:47.757 "data_offset": 2048, 00:16:47.757 "data_size": 63488 00:16:47.757 }, 00:16:47.757 { 00:16:47.757 "name": "BaseBdev4", 00:16:47.757 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:47.757 "is_configured": true, 00:16:47.757 "data_offset": 2048, 00:16:47.757 "data_size": 63488 00:16:47.757 } 00:16:47.757 ] 00:16:47.757 }' 00:16:47.757 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.757 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.324 "name": "raid_bdev1", 00:16:48.324 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:48.324 "strip_size_kb": 64, 00:16:48.324 "state": "online", 00:16:48.324 "raid_level": "raid5f", 00:16:48.324 "superblock": true, 00:16:48.324 "num_base_bdevs": 4, 00:16:48.324 "num_base_bdevs_discovered": 3, 00:16:48.324 "num_base_bdevs_operational": 3, 00:16:48.324 "base_bdevs_list": [ 00:16:48.324 { 00:16:48.324 "name": null, 00:16:48.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.324 "is_configured": false, 00:16:48.324 "data_offset": 0, 00:16:48.324 "data_size": 63488 00:16:48.324 }, 00:16:48.324 { 00:16:48.324 "name": "BaseBdev2", 00:16:48.324 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:48.324 "is_configured": true, 00:16:48.324 "data_offset": 2048, 00:16:48.324 "data_size": 63488 00:16:48.324 }, 00:16:48.324 { 00:16:48.324 "name": "BaseBdev3", 00:16:48.324 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:48.324 "is_configured": true, 00:16:48.324 "data_offset": 2048, 00:16:48.324 "data_size": 63488 00:16:48.324 }, 00:16:48.324 { 00:16:48.324 "name": "BaseBdev4", 00:16:48.324 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:48.324 "is_configured": true, 00:16:48.324 "data_offset": 2048, 00:16:48.324 "data_size": 63488 00:16:48.324 } 00:16:48.324 ] 00:16:48.324 }' 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.324 
09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.324 [2024-12-06 09:54:13.500325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.324 [2024-12-06 09:54:13.500527] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:48.324 [2024-12-06 09:54:13.500543] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:48.324 request: 00:16:48.324 { 00:16:48.324 "base_bdev": "BaseBdev1", 00:16:48.324 "raid_bdev": "raid_bdev1", 00:16:48.324 "method": "bdev_raid_add_base_bdev", 00:16:48.324 "req_id": 1 00:16:48.324 } 00:16:48.324 Got JSON-RPC error response 00:16:48.324 response: 00:16:48.324 { 00:16:48.324 "code": -22, 00:16:48.324 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:16:48.324 } 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:48.324 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.262 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.521 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.521 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.521 "name": "raid_bdev1", 00:16:49.521 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:49.521 "strip_size_kb": 64, 00:16:49.521 "state": "online", 00:16:49.521 "raid_level": "raid5f", 00:16:49.521 "superblock": true, 00:16:49.521 "num_base_bdevs": 4, 00:16:49.521 "num_base_bdevs_discovered": 3, 00:16:49.521 "num_base_bdevs_operational": 3, 00:16:49.521 "base_bdevs_list": [ 00:16:49.521 { 00:16:49.521 "name": null, 00:16:49.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.521 "is_configured": false, 00:16:49.521 "data_offset": 0, 00:16:49.521 "data_size": 63488 00:16:49.521 }, 00:16:49.521 { 00:16:49.521 "name": "BaseBdev2", 00:16:49.521 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:49.521 "is_configured": true, 00:16:49.521 "data_offset": 2048, 00:16:49.521 "data_size": 63488 00:16:49.521 }, 00:16:49.521 { 00:16:49.521 "name": "BaseBdev3", 00:16:49.521 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:49.521 "is_configured": true, 00:16:49.521 "data_offset": 2048, 00:16:49.521 "data_size": 63488 00:16:49.521 }, 00:16:49.521 { 00:16:49.521 "name": "BaseBdev4", 00:16:49.521 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:49.521 "is_configured": true, 00:16:49.521 "data_offset": 2048, 00:16:49.521 "data_size": 63488 00:16:49.521 } 00:16:49.521 ] 00:16:49.521 }' 00:16:49.521 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.521 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:49.781 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:49.781 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.781 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:49.781 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:49.781 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.781 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.781 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.781 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.781 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.781 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.781 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.781 "name": "raid_bdev1", 00:16:49.781 "uuid": "4dd19cf9-4942-4256-9179-6c28ae107f2e", 00:16:49.781 "strip_size_kb": 64, 00:16:49.781 "state": "online", 00:16:49.781 "raid_level": "raid5f", 00:16:49.781 "superblock": true, 00:16:49.781 "num_base_bdevs": 4, 00:16:49.781 "num_base_bdevs_discovered": 3, 00:16:49.781 "num_base_bdevs_operational": 3, 00:16:49.781 "base_bdevs_list": [ 00:16:49.781 { 00:16:49.781 "name": null, 00:16:49.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.782 "is_configured": false, 00:16:49.782 "data_offset": 0, 00:16:49.782 "data_size": 63488 00:16:49.782 }, 00:16:49.782 { 00:16:49.782 "name": "BaseBdev2", 00:16:49.782 "uuid": "868f99ea-8653-57d6-89c1-3658074ad3fd", 00:16:49.782 "is_configured": true, 
00:16:49.782 "data_offset": 2048, 00:16:49.782 "data_size": 63488 00:16:49.782 }, 00:16:49.782 { 00:16:49.782 "name": "BaseBdev3", 00:16:49.782 "uuid": "5f3c76c0-8dea-5923-93ec-ba6b2eba90b4", 00:16:49.782 "is_configured": true, 00:16:49.782 "data_offset": 2048, 00:16:49.782 "data_size": 63488 00:16:49.782 }, 00:16:49.782 { 00:16:49.782 "name": "BaseBdev4", 00:16:49.782 "uuid": "4c32ccb1-6384-5127-98c9-533be5733be6", 00:16:49.782 "is_configured": true, 00:16:49.782 "data_offset": 2048, 00:16:49.782 "data_size": 63488 00:16:49.782 } 00:16:49.782 ] 00:16:49.782 }' 00:16:49.782 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84983 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84983 ']' 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84983 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84983 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.040 killing process with pid 84983 00:16:50.040 Received shutdown signal, test time was about 60.000000 seconds 00:16:50.040 00:16:50.040 Latency(us) 00:16:50.040 [2024-12-06T09:54:15.313Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.040 [2024-12-06T09:54:15.313Z] =================================================================================================================== 00:16:50.040 [2024-12-06T09:54:15.313Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84983' 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84983 00:16:50.040 [2024-12-06 09:54:15.182339] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:50.040 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84983 00:16:50.040 [2024-12-06 09:54:15.182494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.040 [2024-12-06 09:54:15.182581] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.040 [2024-12-06 09:54:15.182594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:50.608 [2024-12-06 09:54:15.705319] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:51.987 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:51.987 00:16:51.987 real 0m27.138s 00:16:51.987 user 0m33.853s 00:16:51.987 sys 0m3.050s 00:16:51.987 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.987 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.987 ************************************ 00:16:51.987 END TEST raid5f_rebuild_test_sb 00:16:51.987 ************************************ 00:16:51.987 09:54:16 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:51.987 09:54:16 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:51.987 09:54:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:51.987 09:54:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.987 09:54:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.987 ************************************ 00:16:51.987 START TEST raid_state_function_test_sb_4k 00:16:51.987 ************************************ 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:51.987 09:54:16 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85797 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85797' 00:16:51.987 Process raid pid: 85797 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85797 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85797 ']' 00:16:51.987 09:54:16 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.987 09:54:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.987 [2024-12-06 09:54:17.088265] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:16:51.987 [2024-12-06 09:54:17.088390] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.247 [2024-12-06 09:54:17.268523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.247 [2024-12-06 09:54:17.404917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.506 [2024-12-06 09:54:17.643049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.506 [2024-12-06 09:54:17.643095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.765 [2024-12-06 09:54:17.907047] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.765 [2024-12-06 09:54:17.907117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.765 [2024-12-06 09:54:17.907127] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.765 [2024-12-06 09:54:17.907137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.765 
09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.765 "name": "Existed_Raid", 00:16:52.765 "uuid": "616af509-d61d-46e4-a2c1-52c0517c1058", 00:16:52.765 "strip_size_kb": 0, 00:16:52.765 "state": "configuring", 00:16:52.765 "raid_level": "raid1", 00:16:52.765 "superblock": true, 00:16:52.765 "num_base_bdevs": 2, 00:16:52.765 "num_base_bdevs_discovered": 0, 00:16:52.765 "num_base_bdevs_operational": 2, 00:16:52.765 "base_bdevs_list": [ 00:16:52.765 { 00:16:52.765 "name": "BaseBdev1", 00:16:52.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.765 "is_configured": false, 00:16:52.765 "data_offset": 0, 00:16:52.765 "data_size": 0 00:16:52.765 }, 00:16:52.765 { 00:16:52.765 "name": "BaseBdev2", 00:16:52.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.765 "is_configured": false, 00:16:52.765 "data_offset": 0, 00:16:52.765 "data_size": 0 00:16:52.765 } 00:16:52.765 ] 00:16:52.765 }' 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.765 09:54:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.333 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.334 [2024-12-06 09:54:18.358223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.334 [2024-12-06 09:54:18.358339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.334 [2024-12-06 09:54:18.366208] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.334 [2024-12-06 09:54:18.366286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.334 [2024-12-06 09:54:18.366315] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.334 [2024-12-06 09:54:18.366341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.334 09:54:18 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.334 [2024-12-06 09:54:18.415407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.334 BaseBdev1 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.334 [ 00:16:53.334 { 00:16:53.334 "name": "BaseBdev1", 00:16:53.334 "aliases": [ 00:16:53.334 
"7476b58f-855f-4a0e-b538-a3a454db99ee" 00:16:53.334 ], 00:16:53.334 "product_name": "Malloc disk", 00:16:53.334 "block_size": 4096, 00:16:53.334 "num_blocks": 8192, 00:16:53.334 "uuid": "7476b58f-855f-4a0e-b538-a3a454db99ee", 00:16:53.334 "assigned_rate_limits": { 00:16:53.334 "rw_ios_per_sec": 0, 00:16:53.334 "rw_mbytes_per_sec": 0, 00:16:53.334 "r_mbytes_per_sec": 0, 00:16:53.334 "w_mbytes_per_sec": 0 00:16:53.334 }, 00:16:53.334 "claimed": true, 00:16:53.334 "claim_type": "exclusive_write", 00:16:53.334 "zoned": false, 00:16:53.334 "supported_io_types": { 00:16:53.334 "read": true, 00:16:53.334 "write": true, 00:16:53.334 "unmap": true, 00:16:53.334 "flush": true, 00:16:53.334 "reset": true, 00:16:53.334 "nvme_admin": false, 00:16:53.334 "nvme_io": false, 00:16:53.334 "nvme_io_md": false, 00:16:53.334 "write_zeroes": true, 00:16:53.334 "zcopy": true, 00:16:53.334 "get_zone_info": false, 00:16:53.334 "zone_management": false, 00:16:53.334 "zone_append": false, 00:16:53.334 "compare": false, 00:16:53.334 "compare_and_write": false, 00:16:53.334 "abort": true, 00:16:53.334 "seek_hole": false, 00:16:53.334 "seek_data": false, 00:16:53.334 "copy": true, 00:16:53.334 "nvme_iov_md": false 00:16:53.334 }, 00:16:53.334 "memory_domains": [ 00:16:53.334 { 00:16:53.334 "dma_device_id": "system", 00:16:53.334 "dma_device_type": 1 00:16:53.334 }, 00:16:53.334 { 00:16:53.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.334 "dma_device_type": 2 00:16:53.334 } 00:16:53.334 ], 00:16:53.334 "driver_specific": {} 00:16:53.334 } 00:16:53.334 ] 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.334 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.334 "name": "Existed_Raid", 00:16:53.334 "uuid": "9c20a912-b7e8-46b8-8075-b684672e54b1", 00:16:53.334 "strip_size_kb": 0, 00:16:53.334 "state": "configuring", 00:16:53.334 "raid_level": "raid1", 00:16:53.334 "superblock": true, 00:16:53.334 "num_base_bdevs": 2, 00:16:53.334 
"num_base_bdevs_discovered": 1, 00:16:53.334 "num_base_bdevs_operational": 2, 00:16:53.334 "base_bdevs_list": [ 00:16:53.334 { 00:16:53.334 "name": "BaseBdev1", 00:16:53.334 "uuid": "7476b58f-855f-4a0e-b538-a3a454db99ee", 00:16:53.334 "is_configured": true, 00:16:53.334 "data_offset": 256, 00:16:53.334 "data_size": 7936 00:16:53.334 }, 00:16:53.335 { 00:16:53.335 "name": "BaseBdev2", 00:16:53.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.335 "is_configured": false, 00:16:53.335 "data_offset": 0, 00:16:53.335 "data_size": 0 00:16:53.335 } 00:16:53.335 ] 00:16:53.335 }' 00:16:53.335 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.335 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.904 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:53.904 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.904 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.904 [2024-12-06 09:54:18.926582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.904 [2024-12-06 09:54:18.926672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:53.904 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.904 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:53.904 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.904 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.904 [2024-12-06 09:54:18.938604] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:53.904 [2024-12-06 09:54:18.940582] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.904 [2024-12-06 09:54:18.940628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.904 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.904 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:53.904 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.905 "name": "Existed_Raid", 00:16:53.905 "uuid": "97379e11-fa91-4385-bbcd-bf459bbdcd85", 00:16:53.905 "strip_size_kb": 0, 00:16:53.905 "state": "configuring", 00:16:53.905 "raid_level": "raid1", 00:16:53.905 "superblock": true, 00:16:53.905 "num_base_bdevs": 2, 00:16:53.905 "num_base_bdevs_discovered": 1, 00:16:53.905 "num_base_bdevs_operational": 2, 00:16:53.905 "base_bdevs_list": [ 00:16:53.905 { 00:16:53.905 "name": "BaseBdev1", 00:16:53.905 "uuid": "7476b58f-855f-4a0e-b538-a3a454db99ee", 00:16:53.905 "is_configured": true, 00:16:53.905 "data_offset": 256, 00:16:53.905 "data_size": 7936 00:16:53.905 }, 00:16:53.905 { 00:16:53.905 "name": "BaseBdev2", 00:16:53.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.905 "is_configured": false, 00:16:53.905 "data_offset": 0, 00:16:53.905 "data_size": 0 00:16:53.905 } 00:16:53.905 ] 00:16:53.905 }' 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.905 09:54:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.165 09:54:19 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.165 [2024-12-06 09:54:19.375424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:54.165 [2024-12-06 09:54:19.375756] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:54.165 [2024-12-06 09:54:19.375808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:54.165 [2024-12-06 09:54:19.376113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:54.165 [2024-12-06 09:54:19.376338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:54.165 [2024-12-06 09:54:19.376388] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:54.165 [2024-12-06 09:54:19.376565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.165 BaseBdev2 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:54.165 09:54:19 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.165 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.165 [ 00:16:54.165 { 00:16:54.165 "name": "BaseBdev2", 00:16:54.165 "aliases": [ 00:16:54.165 "ead7ec1f-9bc5-433a-a8f2-559f0f68ceff" 00:16:54.165 ], 00:16:54.165 "product_name": "Malloc disk", 00:16:54.166 "block_size": 4096, 00:16:54.166 "num_blocks": 8192, 00:16:54.166 "uuid": "ead7ec1f-9bc5-433a-a8f2-559f0f68ceff", 00:16:54.166 "assigned_rate_limits": { 00:16:54.166 "rw_ios_per_sec": 0, 00:16:54.166 "rw_mbytes_per_sec": 0, 00:16:54.166 "r_mbytes_per_sec": 0, 00:16:54.166 "w_mbytes_per_sec": 0 00:16:54.166 }, 00:16:54.166 "claimed": true, 00:16:54.166 "claim_type": "exclusive_write", 00:16:54.166 "zoned": false, 00:16:54.166 "supported_io_types": { 00:16:54.166 "read": true, 00:16:54.166 "write": true, 00:16:54.166 "unmap": true, 00:16:54.166 "flush": true, 00:16:54.166 "reset": true, 00:16:54.166 "nvme_admin": false, 00:16:54.166 "nvme_io": false, 00:16:54.166 "nvme_io_md": false, 00:16:54.166 "write_zeroes": true, 00:16:54.166 "zcopy": true, 00:16:54.166 "get_zone_info": false, 00:16:54.166 "zone_management": false, 00:16:54.166 "zone_append": false, 00:16:54.166 "compare": false, 00:16:54.166 "compare_and_write": false, 00:16:54.166 "abort": true, 00:16:54.166 "seek_hole": false, 00:16:54.166 "seek_data": false, 00:16:54.166 "copy": true, 00:16:54.166 "nvme_iov_md": false 
00:16:54.166 }, 00:16:54.166 "memory_domains": [ 00:16:54.166 { 00:16:54.166 "dma_device_id": "system", 00:16:54.166 "dma_device_type": 1 00:16:54.166 }, 00:16:54.166 { 00:16:54.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.166 "dma_device_type": 2 00:16:54.166 } 00:16:54.166 ], 00:16:54.166 "driver_specific": {} 00:16:54.166 } 00:16:54.166 ] 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.166 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.441 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.441 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.441 "name": "Existed_Raid", 00:16:54.441 "uuid": "97379e11-fa91-4385-bbcd-bf459bbdcd85", 00:16:54.441 "strip_size_kb": 0, 00:16:54.441 "state": "online", 00:16:54.441 "raid_level": "raid1", 00:16:54.441 "superblock": true, 00:16:54.441 "num_base_bdevs": 2, 00:16:54.441 "num_base_bdevs_discovered": 2, 00:16:54.441 "num_base_bdevs_operational": 2, 00:16:54.441 "base_bdevs_list": [ 00:16:54.441 { 00:16:54.441 "name": "BaseBdev1", 00:16:54.441 "uuid": "7476b58f-855f-4a0e-b538-a3a454db99ee", 00:16:54.441 "is_configured": true, 00:16:54.441 "data_offset": 256, 00:16:54.441 "data_size": 7936 00:16:54.441 }, 00:16:54.441 { 00:16:54.441 "name": "BaseBdev2", 00:16:54.441 "uuid": "ead7ec1f-9bc5-433a-a8f2-559f0f68ceff", 00:16:54.441 "is_configured": true, 00:16:54.441 "data_offset": 256, 00:16:54.441 "data_size": 7936 00:16:54.441 } 00:16:54.441 ] 00:16:54.441 }' 00:16:54.441 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.441 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.718 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:54.718 09:54:19 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:54.718 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:54.718 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:54.718 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:54.718 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:54.718 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:54.718 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:54.718 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.718 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.718 [2024-12-06 09:54:19.910881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.718 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.718 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:54.718 "name": "Existed_Raid", 00:16:54.718 "aliases": [ 00:16:54.718 "97379e11-fa91-4385-bbcd-bf459bbdcd85" 00:16:54.718 ], 00:16:54.718 "product_name": "Raid Volume", 00:16:54.718 "block_size": 4096, 00:16:54.718 "num_blocks": 7936, 00:16:54.718 "uuid": "97379e11-fa91-4385-bbcd-bf459bbdcd85", 00:16:54.718 "assigned_rate_limits": { 00:16:54.718 "rw_ios_per_sec": 0, 00:16:54.718 "rw_mbytes_per_sec": 0, 00:16:54.718 "r_mbytes_per_sec": 0, 00:16:54.718 "w_mbytes_per_sec": 0 00:16:54.718 }, 00:16:54.719 "claimed": false, 00:16:54.719 "zoned": false, 00:16:54.719 "supported_io_types": { 00:16:54.719 "read": true, 
00:16:54.719 "write": true, 00:16:54.719 "unmap": false, 00:16:54.719 "flush": false, 00:16:54.719 "reset": true, 00:16:54.719 "nvme_admin": false, 00:16:54.719 "nvme_io": false, 00:16:54.719 "nvme_io_md": false, 00:16:54.719 "write_zeroes": true, 00:16:54.719 "zcopy": false, 00:16:54.719 "get_zone_info": false, 00:16:54.719 "zone_management": false, 00:16:54.719 "zone_append": false, 00:16:54.719 "compare": false, 00:16:54.719 "compare_and_write": false, 00:16:54.719 "abort": false, 00:16:54.719 "seek_hole": false, 00:16:54.719 "seek_data": false, 00:16:54.719 "copy": false, 00:16:54.719 "nvme_iov_md": false 00:16:54.719 }, 00:16:54.719 "memory_domains": [ 00:16:54.719 { 00:16:54.719 "dma_device_id": "system", 00:16:54.719 "dma_device_type": 1 00:16:54.719 }, 00:16:54.719 { 00:16:54.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.719 "dma_device_type": 2 00:16:54.719 }, 00:16:54.719 { 00:16:54.719 "dma_device_id": "system", 00:16:54.719 "dma_device_type": 1 00:16:54.719 }, 00:16:54.719 { 00:16:54.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.719 "dma_device_type": 2 00:16:54.719 } 00:16:54.719 ], 00:16:54.719 "driver_specific": { 00:16:54.719 "raid": { 00:16:54.719 "uuid": "97379e11-fa91-4385-bbcd-bf459bbdcd85", 00:16:54.719 "strip_size_kb": 0, 00:16:54.719 "state": "online", 00:16:54.719 "raid_level": "raid1", 00:16:54.719 "superblock": true, 00:16:54.719 "num_base_bdevs": 2, 00:16:54.719 "num_base_bdevs_discovered": 2, 00:16:54.719 "num_base_bdevs_operational": 2, 00:16:54.719 "base_bdevs_list": [ 00:16:54.719 { 00:16:54.719 "name": "BaseBdev1", 00:16:54.719 "uuid": "7476b58f-855f-4a0e-b538-a3a454db99ee", 00:16:54.719 "is_configured": true, 00:16:54.719 "data_offset": 256, 00:16:54.719 "data_size": 7936 00:16:54.719 }, 00:16:54.719 { 00:16:54.719 "name": "BaseBdev2", 00:16:54.719 "uuid": "ead7ec1f-9bc5-433a-a8f2-559f0f68ceff", 00:16:54.719 "is_configured": true, 00:16:54.719 "data_offset": 256, 00:16:54.719 "data_size": 7936 00:16:54.719 } 
00:16:54.719 ] 00:16:54.719 } 00:16:54.719 } 00:16:54.719 }' 00:16:54.719 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.979 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:54.979 BaseBdev2' 00:16:54.979 09:54:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.979 [2024-12-06 09:54:20.134234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:54.979 09:54:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.979 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.239 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.239 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.239 "name": "Existed_Raid", 00:16:55.239 "uuid": "97379e11-fa91-4385-bbcd-bf459bbdcd85", 00:16:55.239 "strip_size_kb": 0, 00:16:55.239 "state": "online", 00:16:55.239 "raid_level": "raid1", 00:16:55.239 "superblock": true, 00:16:55.239 
"num_base_bdevs": 2, 00:16:55.239 "num_base_bdevs_discovered": 1, 00:16:55.239 "num_base_bdevs_operational": 1, 00:16:55.239 "base_bdevs_list": [ 00:16:55.239 { 00:16:55.239 "name": null, 00:16:55.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.239 "is_configured": false, 00:16:55.239 "data_offset": 0, 00:16:55.239 "data_size": 7936 00:16:55.239 }, 00:16:55.239 { 00:16:55.239 "name": "BaseBdev2", 00:16:55.239 "uuid": "ead7ec1f-9bc5-433a-a8f2-559f0f68ceff", 00:16:55.239 "is_configured": true, 00:16:55.239 "data_offset": 256, 00:16:55.239 "data_size": 7936 00:16:55.239 } 00:16:55.239 ] 00:16:55.239 }' 00:16:55.239 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.239 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.498 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:55.498 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:55.498 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.498 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.498 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.498 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:55.498 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.498 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:55.498 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:55.498 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:16:55.498 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.498 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.498 [2024-12-06 09:54:20.745320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:55.498 [2024-12-06 09:54:20.745447] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:55.758 [2024-12-06 09:54:20.846687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:55.758 [2024-12-06 09:54:20.846749] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:55.758 [2024-12-06 09:54:20.846762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:55.758 09:54:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85797 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85797 ']' 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85797 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85797 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85797' 00:16:55.758 killing process with pid 85797 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85797 00:16:55.758 [2024-12-06 09:54:20.933876] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:55.758 09:54:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85797 00:16:55.758 [2024-12-06 09:54:20.951617] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:57.138 09:54:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:57.138 00:16:57.138 real 0m5.175s 00:16:57.138 user 0m7.285s 00:16:57.138 sys 0m0.980s 00:16:57.138 09:54:22 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.138 ************************************ 00:16:57.138 END TEST raid_state_function_test_sb_4k 00:16:57.138 ************************************ 00:16:57.138 09:54:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.138 09:54:22 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:57.138 09:54:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:57.138 09:54:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.138 09:54:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:57.138 ************************************ 00:16:57.138 START TEST raid_superblock_test_4k 00:16:57.138 ************************************ 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:57.138 
09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86049 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86049 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86049 ']' 00:16:57.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.138 09:54:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.138 [2024-12-06 09:54:22.320611] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:16:57.138 [2024-12-06 09:54:22.320753] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86049 ] 00:16:57.398 [2024-12-06 09:54:22.495669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.398 [2024-12-06 09:54:22.626519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.658 [2024-12-06 09:54:22.854238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.658 [2024-12-06 09:54:22.854282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.917 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.917 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:16:57.917 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:57.917 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.917 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:57.917 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:57.917 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:57.917 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.917 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.917 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.918 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:16:57.918 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.918 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.177 malloc1 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.177 [2024-12-06 09:54:23.209554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:58.177 [2024-12-06 09:54:23.209720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.177 [2024-12-06 09:54:23.209763] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:58.177 [2024-12-06 09:54:23.209793] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.177 [2024-12-06 09:54:23.212298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.177 [2024-12-06 09:54:23.212372] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:58.177 pt1 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.177 malloc2 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.177 [2024-12-06 09:54:23.274434] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:58.177 [2024-12-06 09:54:23.274490] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.177 [2024-12-06 09:54:23.274517] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:58.177 [2024-12-06 09:54:23.274526] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.177 [2024-12-06 09:54:23.276822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.177 [2024-12-06 
09:54:23.276856] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:58.177 pt2 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.177 [2024-12-06 09:54:23.286464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:58.177 [2024-12-06 09:54:23.288465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:58.177 [2024-12-06 09:54:23.288634] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:58.177 [2024-12-06 09:54:23.288651] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:58.177 [2024-12-06 09:54:23.288889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:58.177 [2024-12-06 09:54:23.289049] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:58.177 [2024-12-06 09:54:23.289065] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:58.177 [2024-12-06 09:54:23.289238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.177 "name": "raid_bdev1", 00:16:58.177 "uuid": "49d054f5-da3d-4308-bc67-7ff9334ddfdb", 00:16:58.177 "strip_size_kb": 0, 00:16:58.177 "state": "online", 00:16:58.177 "raid_level": "raid1", 00:16:58.177 "superblock": true, 00:16:58.177 "num_base_bdevs": 2, 00:16:58.177 
"num_base_bdevs_discovered": 2, 00:16:58.177 "num_base_bdevs_operational": 2, 00:16:58.177 "base_bdevs_list": [ 00:16:58.177 { 00:16:58.177 "name": "pt1", 00:16:58.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.177 "is_configured": true, 00:16:58.177 "data_offset": 256, 00:16:58.177 "data_size": 7936 00:16:58.177 }, 00:16:58.177 { 00:16:58.177 "name": "pt2", 00:16:58.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.177 "is_configured": true, 00:16:58.177 "data_offset": 256, 00:16:58.177 "data_size": 7936 00:16:58.177 } 00:16:58.177 ] 00:16:58.177 }' 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.177 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.437 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:58.437 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:58.437 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:58.437 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.437 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.437 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.437 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.437 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.437 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.437 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.437 [2024-12-06 09:54:23.701962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:58.697 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.697 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.697 "name": "raid_bdev1", 00:16:58.697 "aliases": [ 00:16:58.697 "49d054f5-da3d-4308-bc67-7ff9334ddfdb" 00:16:58.697 ], 00:16:58.697 "product_name": "Raid Volume", 00:16:58.697 "block_size": 4096, 00:16:58.697 "num_blocks": 7936, 00:16:58.697 "uuid": "49d054f5-da3d-4308-bc67-7ff9334ddfdb", 00:16:58.697 "assigned_rate_limits": { 00:16:58.697 "rw_ios_per_sec": 0, 00:16:58.697 "rw_mbytes_per_sec": 0, 00:16:58.697 "r_mbytes_per_sec": 0, 00:16:58.697 "w_mbytes_per_sec": 0 00:16:58.697 }, 00:16:58.697 "claimed": false, 00:16:58.697 "zoned": false, 00:16:58.697 "supported_io_types": { 00:16:58.697 "read": true, 00:16:58.697 "write": true, 00:16:58.697 "unmap": false, 00:16:58.697 "flush": false, 00:16:58.697 "reset": true, 00:16:58.697 "nvme_admin": false, 00:16:58.697 "nvme_io": false, 00:16:58.697 "nvme_io_md": false, 00:16:58.697 "write_zeroes": true, 00:16:58.697 "zcopy": false, 00:16:58.697 "get_zone_info": false, 00:16:58.697 "zone_management": false, 00:16:58.697 "zone_append": false, 00:16:58.697 "compare": false, 00:16:58.697 "compare_and_write": false, 00:16:58.697 "abort": false, 00:16:58.697 "seek_hole": false, 00:16:58.697 "seek_data": false, 00:16:58.697 "copy": false, 00:16:58.697 "nvme_iov_md": false 00:16:58.697 }, 00:16:58.697 "memory_domains": [ 00:16:58.697 { 00:16:58.697 "dma_device_id": "system", 00:16:58.697 "dma_device_type": 1 00:16:58.697 }, 00:16:58.697 { 00:16:58.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.697 "dma_device_type": 2 00:16:58.697 }, 00:16:58.697 { 00:16:58.697 "dma_device_id": "system", 00:16:58.697 "dma_device_type": 1 00:16:58.697 }, 00:16:58.697 { 00:16:58.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.698 "dma_device_type": 2 00:16:58.698 } 00:16:58.698 ], 
00:16:58.698 "driver_specific": { 00:16:58.698 "raid": { 00:16:58.698 "uuid": "49d054f5-da3d-4308-bc67-7ff9334ddfdb", 00:16:58.698 "strip_size_kb": 0, 00:16:58.698 "state": "online", 00:16:58.698 "raid_level": "raid1", 00:16:58.698 "superblock": true, 00:16:58.698 "num_base_bdevs": 2, 00:16:58.698 "num_base_bdevs_discovered": 2, 00:16:58.698 "num_base_bdevs_operational": 2, 00:16:58.698 "base_bdevs_list": [ 00:16:58.698 { 00:16:58.698 "name": "pt1", 00:16:58.698 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.698 "is_configured": true, 00:16:58.698 "data_offset": 256, 00:16:58.698 "data_size": 7936 00:16:58.698 }, 00:16:58.698 { 00:16:58.698 "name": "pt2", 00:16:58.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.698 "is_configured": true, 00:16:58.698 "data_offset": 256, 00:16:58.698 "data_size": 7936 00:16:58.698 } 00:16:58.698 ] 00:16:58.698 } 00:16:58.698 } 00:16:58.698 }' 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:58.698 pt2' 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.698 [2024-12-06 09:54:23.917569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=49d054f5-da3d-4308-bc67-7ff9334ddfdb 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 49d054f5-da3d-4308-bc67-7ff9334ddfdb ']' 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.698 [2024-12-06 09:54:23.945267] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.698 [2024-12-06 09:54:23.945332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.698 [2024-12-06 09:54:23.945410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.698 [2024-12-06 09:54:23.945464] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.698 [2024-12-06 09:54:23.945476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:58.698 09:54:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:58.958 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.959 [2024-12-06 09:54:24.081082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:58.959 [2024-12-06 09:54:24.083143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:58.959 [2024-12-06 09:54:24.083226] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:58.959 [2024-12-06 09:54:24.083276] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:58.959 [2024-12-06 09:54:24.083289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.959 [2024-12-06 09:54:24.083298] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:58.959 request: 00:16:58.959 { 00:16:58.959 "name": "raid_bdev1", 00:16:58.959 "raid_level": "raid1", 00:16:58.959 "base_bdevs": [ 00:16:58.959 "malloc1", 00:16:58.959 "malloc2" 00:16:58.959 ], 00:16:58.959 "superblock": false, 00:16:58.959 "method": "bdev_raid_create", 00:16:58.959 "req_id": 1 00:16:58.959 } 00:16:58.959 Got JSON-RPC error response 00:16:58.959 response: 00:16:58.959 { 00:16:58.959 "code": -17, 00:16:58.959 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:58.959 } 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.959 [2024-12-06 09:54:24.144962] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:58.959 [2024-12-06 09:54:24.145056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.959 [2024-12-06 09:54:24.145092] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:58.959 [2024-12-06 09:54:24.145129] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.959 [2024-12-06 09:54:24.147504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.959 [2024-12-06 09:54:24.147582] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:58.959 [2024-12-06 09:54:24.147677] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:58.959 [2024-12-06 09:54:24.147755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:58.959 pt1 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.959 "name": "raid_bdev1", 00:16:58.959 "uuid": "49d054f5-da3d-4308-bc67-7ff9334ddfdb", 00:16:58.959 "strip_size_kb": 0, 00:16:58.959 "state": "configuring", 00:16:58.959 "raid_level": "raid1", 00:16:58.959 "superblock": true, 00:16:58.959 "num_base_bdevs": 2, 00:16:58.959 "num_base_bdevs_discovered": 1, 00:16:58.959 "num_base_bdevs_operational": 2, 00:16:58.959 "base_bdevs_list": [ 00:16:58.959 { 00:16:58.959 "name": "pt1", 00:16:58.959 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:58.959 "is_configured": true, 00:16:58.959 "data_offset": 256, 00:16:58.959 "data_size": 7936 00:16:58.959 }, 00:16:58.959 { 00:16:58.959 "name": null, 00:16:58.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.959 "is_configured": false, 00:16:58.959 "data_offset": 256, 00:16:58.959 "data_size": 7936 00:16:58.959 } 
00:16:58.959 ] 00:16:58.959 }' 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.959 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.529 [2024-12-06 09:54:24.620166] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.529 [2024-12-06 09:54:24.620224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.529 [2024-12-06 09:54:24.620243] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:59.529 [2024-12-06 09:54:24.620253] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.529 [2024-12-06 09:54:24.620660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.529 [2024-12-06 09:54:24.620681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.529 [2024-12-06 09:54:24.620742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:59.529 [2024-12-06 09:54:24.620768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.529 [2024-12-06 09:54:24.620888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:16:59.529 [2024-12-06 09:54:24.620899] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:59.529 [2024-12-06 09:54:24.621169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:59.529 [2024-12-06 09:54:24.621332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:59.529 [2024-12-06 09:54:24.621341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:59.529 [2024-12-06 09:54:24.621469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.529 pt2 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.529 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.530 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.530 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.530 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.530 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.530 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.530 "name": "raid_bdev1", 00:16:59.530 "uuid": "49d054f5-da3d-4308-bc67-7ff9334ddfdb", 00:16:59.530 "strip_size_kb": 0, 00:16:59.530 "state": "online", 00:16:59.530 "raid_level": "raid1", 00:16:59.530 "superblock": true, 00:16:59.530 "num_base_bdevs": 2, 00:16:59.530 "num_base_bdevs_discovered": 2, 00:16:59.530 "num_base_bdevs_operational": 2, 00:16:59.530 "base_bdevs_list": [ 00:16:59.530 { 00:16:59.530 "name": "pt1", 00:16:59.530 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.530 "is_configured": true, 00:16:59.530 "data_offset": 256, 00:16:59.530 "data_size": 7936 00:16:59.530 }, 00:16:59.530 { 00:16:59.530 "name": "pt2", 00:16:59.530 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.530 "is_configured": true, 00:16:59.530 "data_offset": 256, 00:16:59.530 "data_size": 7936 00:16:59.530 } 00:16:59.530 ] 00:16:59.530 }' 00:16:59.530 09:54:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.530 09:54:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.099 [2024-12-06 09:54:25.107512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:00.099 "name": "raid_bdev1", 00:17:00.099 "aliases": [ 00:17:00.099 "49d054f5-da3d-4308-bc67-7ff9334ddfdb" 00:17:00.099 ], 00:17:00.099 "product_name": "Raid Volume", 00:17:00.099 "block_size": 4096, 00:17:00.099 "num_blocks": 7936, 00:17:00.099 "uuid": "49d054f5-da3d-4308-bc67-7ff9334ddfdb", 00:17:00.099 "assigned_rate_limits": { 00:17:00.099 "rw_ios_per_sec": 0, 00:17:00.099 "rw_mbytes_per_sec": 0, 00:17:00.099 "r_mbytes_per_sec": 0, 00:17:00.099 "w_mbytes_per_sec": 0 00:17:00.099 }, 00:17:00.099 "claimed": false, 00:17:00.099 "zoned": false, 00:17:00.099 "supported_io_types": { 00:17:00.099 "read": true, 00:17:00.099 "write": true, 00:17:00.099 "unmap": false, 
00:17:00.099 "flush": false, 00:17:00.099 "reset": true, 00:17:00.099 "nvme_admin": false, 00:17:00.099 "nvme_io": false, 00:17:00.099 "nvme_io_md": false, 00:17:00.099 "write_zeroes": true, 00:17:00.099 "zcopy": false, 00:17:00.099 "get_zone_info": false, 00:17:00.099 "zone_management": false, 00:17:00.099 "zone_append": false, 00:17:00.099 "compare": false, 00:17:00.099 "compare_and_write": false, 00:17:00.099 "abort": false, 00:17:00.099 "seek_hole": false, 00:17:00.099 "seek_data": false, 00:17:00.099 "copy": false, 00:17:00.099 "nvme_iov_md": false 00:17:00.099 }, 00:17:00.099 "memory_domains": [ 00:17:00.099 { 00:17:00.099 "dma_device_id": "system", 00:17:00.099 "dma_device_type": 1 00:17:00.099 }, 00:17:00.099 { 00:17:00.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.099 "dma_device_type": 2 00:17:00.099 }, 00:17:00.099 { 00:17:00.099 "dma_device_id": "system", 00:17:00.099 "dma_device_type": 1 00:17:00.099 }, 00:17:00.099 { 00:17:00.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.099 "dma_device_type": 2 00:17:00.099 } 00:17:00.099 ], 00:17:00.099 "driver_specific": { 00:17:00.099 "raid": { 00:17:00.099 "uuid": "49d054f5-da3d-4308-bc67-7ff9334ddfdb", 00:17:00.099 "strip_size_kb": 0, 00:17:00.099 "state": "online", 00:17:00.099 "raid_level": "raid1", 00:17:00.099 "superblock": true, 00:17:00.099 "num_base_bdevs": 2, 00:17:00.099 "num_base_bdevs_discovered": 2, 00:17:00.099 "num_base_bdevs_operational": 2, 00:17:00.099 "base_bdevs_list": [ 00:17:00.099 { 00:17:00.099 "name": "pt1", 00:17:00.099 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.099 "is_configured": true, 00:17:00.099 "data_offset": 256, 00:17:00.099 "data_size": 7936 00:17:00.099 }, 00:17:00.099 { 00:17:00.099 "name": "pt2", 00:17:00.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.099 "is_configured": true, 00:17:00.099 "data_offset": 256, 00:17:00.099 "data_size": 7936 00:17:00.099 } 00:17:00.099 ] 00:17:00.099 } 00:17:00.099 } 00:17:00.099 }' 00:17:00.099 
09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:00.099 pt2' 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:00.099 [2024-12-06 09:54:25.339175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.099 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 49d054f5-da3d-4308-bc67-7ff9334ddfdb '!=' 49d054f5-da3d-4308-bc67-7ff9334ddfdb ']' 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.358 [2024-12-06 09:54:25.390908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.358 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.359 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.359 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.359 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.359 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.359 "name": "raid_bdev1", 00:17:00.359 "uuid": 
"49d054f5-da3d-4308-bc67-7ff9334ddfdb", 00:17:00.359 "strip_size_kb": 0, 00:17:00.359 "state": "online", 00:17:00.359 "raid_level": "raid1", 00:17:00.359 "superblock": true, 00:17:00.359 "num_base_bdevs": 2, 00:17:00.359 "num_base_bdevs_discovered": 1, 00:17:00.359 "num_base_bdevs_operational": 1, 00:17:00.359 "base_bdevs_list": [ 00:17:00.359 { 00:17:00.359 "name": null, 00:17:00.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.359 "is_configured": false, 00:17:00.359 "data_offset": 0, 00:17:00.359 "data_size": 7936 00:17:00.359 }, 00:17:00.359 { 00:17:00.359 "name": "pt2", 00:17:00.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.359 "is_configured": true, 00:17:00.359 "data_offset": 256, 00:17:00.359 "data_size": 7936 00:17:00.359 } 00:17:00.359 ] 00:17:00.359 }' 00:17:00.359 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.359 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.618 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:00.618 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.618 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.618 [2024-12-06 09:54:25.874196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.618 [2024-12-06 09:54:25.874315] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.618 [2024-12-06 09:54:25.874436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.618 [2024-12-06 09:54:25.874513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.618 [2024-12-06 09:54:25.874566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:17:00.618 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.618 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:00.618 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.618 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.618 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.878 [2024-12-06 09:54:25.930018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:00.878 [2024-12-06 09:54:25.930080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.878 [2024-12-06 09:54:25.930098] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:00.878 [2024-12-06 09:54:25.930109] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.878 [2024-12-06 09:54:25.932643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.878 [2024-12-06 09:54:25.932685] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:00.878 [2024-12-06 09:54:25.932770] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:00.878 [2024-12-06 09:54:25.932821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:00.878 [2024-12-06 09:54:25.932927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:00.878 [2024-12-06 09:54:25.932939] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:00.878 [2024-12-06 09:54:25.933199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:00.878 [2024-12-06 09:54:25.933372] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:00.878 [2024-12-06 09:54:25.933381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:00.878 [2024-12-06 09:54:25.933528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.878 pt2 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.878 09:54:25 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.878 "name": "raid_bdev1", 00:17:00.878 "uuid": "49d054f5-da3d-4308-bc67-7ff9334ddfdb", 00:17:00.878 "strip_size_kb": 0, 00:17:00.878 "state": "online", 00:17:00.878 "raid_level": "raid1", 00:17:00.878 "superblock": true, 00:17:00.878 "num_base_bdevs": 2, 00:17:00.878 "num_base_bdevs_discovered": 1, 00:17:00.878 "num_base_bdevs_operational": 1, 00:17:00.878 "base_bdevs_list": [ 00:17:00.878 { 00:17:00.878 "name": null, 00:17:00.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.878 "is_configured": false, 00:17:00.878 "data_offset": 256, 00:17:00.878 "data_size": 7936 00:17:00.878 }, 00:17:00.878 { 00:17:00.878 "name": "pt2", 00:17:00.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.878 "is_configured": true, 00:17:00.878 "data_offset": 256, 00:17:00.878 "data_size": 7936 00:17:00.878 } 00:17:00.878 ] 00:17:00.878 }' 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.878 09:54:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.139 [2024-12-06 09:54:26.349254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.139 [2024-12-06 09:54:26.349343] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.139 [2024-12-06 09:54:26.349411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.139 [2024-12-06 09:54:26.349475] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:01.139 [2024-12-06 09:54:26.349515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.139 [2024-12-06 09:54:26.393257] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:01.139 [2024-12-06 09:54:26.393339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.139 [2024-12-06 09:54:26.393371] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:01.139 [2024-12-06 09:54:26.393402] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.139 [2024-12-06 09:54:26.395745] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.139 [2024-12-06 09:54:26.395823] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:01.139 [2024-12-06 09:54:26.395926] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:01.139 [2024-12-06 09:54:26.396011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:01.139 [2024-12-06 09:54:26.396211] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:01.139 [2024-12-06 09:54:26.396267] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.139 [2024-12-06 09:54:26.396302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:01.139 [2024-12-06 09:54:26.396413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.139 [2024-12-06 09:54:26.396521] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:01.139 [2024-12-06 09:54:26.396555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:01.139 [2024-12-06 09:54:26.396804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:01.139 [2024-12-06 09:54:26.396981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:01.139 [2024-12-06 09:54:26.397025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:01.139 [2024-12-06 09:54:26.397248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.139 pt1 00:17:01.139 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.140 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.398 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.398 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.398 "name": "raid_bdev1", 00:17:01.398 "uuid": "49d054f5-da3d-4308-bc67-7ff9334ddfdb", 00:17:01.398 "strip_size_kb": 0, 00:17:01.398 "state": "online", 00:17:01.398 
"raid_level": "raid1", 00:17:01.398 "superblock": true, 00:17:01.398 "num_base_bdevs": 2, 00:17:01.398 "num_base_bdevs_discovered": 1, 00:17:01.398 "num_base_bdevs_operational": 1, 00:17:01.398 "base_bdevs_list": [ 00:17:01.398 { 00:17:01.398 "name": null, 00:17:01.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.398 "is_configured": false, 00:17:01.398 "data_offset": 256, 00:17:01.398 "data_size": 7936 00:17:01.398 }, 00:17:01.398 { 00:17:01.398 "name": "pt2", 00:17:01.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.398 "is_configured": true, 00:17:01.398 "data_offset": 256, 00:17:01.398 "data_size": 7936 00:17:01.398 } 00:17:01.398 ] 00:17:01.398 }' 00:17:01.398 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.398 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.657 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:01.657 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:01.657 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.657 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.657 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.657 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:01.657 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:01.657 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.657 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.657 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | 
.uuid' 00:17:01.657 [2024-12-06 09:54:26.900621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.657 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.917 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 49d054f5-da3d-4308-bc67-7ff9334ddfdb '!=' 49d054f5-da3d-4308-bc67-7ff9334ddfdb ']' 00:17:01.917 09:54:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86049 00:17:01.917 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86049 ']' 00:17:01.917 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86049 00:17:01.917 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:01.917 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.917 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86049 00:17:01.917 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.917 killing process with pid 86049 00:17:01.917 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.917 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86049' 00:17:01.917 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86049 00:17:01.917 [2024-12-06 09:54:26.986854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:01.917 [2024-12-06 09:54:26.986921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.917 [2024-12-06 09:54:26.986954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.917 [2024-12-06 
09:54:26.986967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:01.917 09:54:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86049 00:17:02.176 [2024-12-06 09:54:27.207113] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:03.555 09:54:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:03.555 00:17:03.555 real 0m6.188s 00:17:03.555 user 0m9.181s 00:17:03.555 sys 0m1.157s 00:17:03.555 09:54:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.555 09:54:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.555 ************************************ 00:17:03.555 END TEST raid_superblock_test_4k 00:17:03.555 ************************************ 00:17:03.555 09:54:28 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:03.555 09:54:28 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:03.555 09:54:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:03.555 09:54:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.555 09:54:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:03.555 ************************************ 00:17:03.555 START TEST raid_rebuild_test_sb_4k 00:17:03.555 ************************************ 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:03.555 09:54:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86379 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86379 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86379 ']' 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.555 09:54:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.555 [2024-12-06 09:54:28.592349] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:17:03.555 [2024-12-06 09:54:28.592519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:03.555 Zero copy mechanism will not be used. 
00:17:03.556 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86379 ] 00:17:03.556 [2024-12-06 09:54:28.767784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.815 [2024-12-06 09:54:28.907445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.075 [2024-12-06 09:54:29.140745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.075 [2024-12-06 09:54:29.140906] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.334 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.334 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:04.334 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:04.334 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:04.334 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.334 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.334 BaseBdev1_malloc 00:17:04.334 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.334 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:04.334 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.335 [2024-12-06 09:54:29.474611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:04.335 [2024-12-06 09:54:29.474763] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.335 [2024-12-06 09:54:29.474792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:04.335 [2024-12-06 09:54:29.474805] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.335 [2024-12-06 09:54:29.477232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.335 [2024-12-06 09:54:29.477270] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:04.335 BaseBdev1 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.335 BaseBdev2_malloc 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.335 [2024-12-06 09:54:29.535435] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:04.335 [2024-12-06 09:54:29.535500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.335 [2024-12-06 09:54:29.535526] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:17:04.335 [2024-12-06 09:54:29.535538] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.335 [2024-12-06 09:54:29.537844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.335 [2024-12-06 09:54:29.537950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:04.335 BaseBdev2 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.335 spare_malloc 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.335 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.594 spare_delay 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.594 [2024-12-06 09:54:29.621201] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:04.594 
[2024-12-06 09:54:29.621351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.594 [2024-12-06 09:54:29.621374] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:04.594 [2024-12-06 09:54:29.621385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.594 [2024-12-06 09:54:29.623701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.594 [2024-12-06 09:54:29.623742] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:04.594 spare 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.594 [2024-12-06 09:54:29.633250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:04.594 [2024-12-06 09:54:29.635267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:04.594 [2024-12-06 09:54:29.635460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:04.594 [2024-12-06 09:54:29.635475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:04.594 [2024-12-06 09:54:29.635713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:04.594 [2024-12-06 09:54:29.635897] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:04.594 [2024-12-06 09:54:29.635915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:17:04.594 [2024-12-06 09:54:29.636067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.594 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.594 "name": "raid_bdev1", 00:17:04.594 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:04.594 "strip_size_kb": 0, 00:17:04.594 "state": "online", 00:17:04.594 "raid_level": "raid1", 00:17:04.594 "superblock": true, 00:17:04.594 "num_base_bdevs": 2, 00:17:04.594 "num_base_bdevs_discovered": 2, 00:17:04.594 "num_base_bdevs_operational": 2, 00:17:04.594 "base_bdevs_list": [ 00:17:04.594 { 00:17:04.594 "name": "BaseBdev1", 00:17:04.594 "uuid": "0ba564f7-af46-5727-9e7a-f284f75c18b2", 00:17:04.594 "is_configured": true, 00:17:04.594 "data_offset": 256, 00:17:04.594 "data_size": 7936 00:17:04.594 }, 00:17:04.594 { 00:17:04.594 "name": "BaseBdev2", 00:17:04.594 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:04.594 "is_configured": true, 00:17:04.595 "data_offset": 256, 00:17:04.595 "data_size": 7936 00:17:04.595 } 00:17:04.595 ] 00:17:04.595 }' 00:17:04.595 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.595 09:54:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.854 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:04.854 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:04.854 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.854 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.854 [2024-12-06 09:54:30.068798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.854 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.854 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:04.854 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:04.855 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:04.855 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.855 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:05.115 [2024-12-06 09:54:30.348138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:05.115 /dev/nbd0 00:17:05.115 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:05.375 1+0 records in 00:17:05.375 1+0 records out 00:17:05.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004865 s, 8.4 MB/s 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:05.375 09:54:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:05.945 7936+0 records in 00:17:05.945 7936+0 records out 00:17:05.945 32505856 bytes (33 MB, 31 MiB) copied, 0.630491 s, 51.6 MB/s 00:17:05.945 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:05.945 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:05.945 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:05.945 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:05.945 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:05.945 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:05.945 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:06.206 [2024-12-06 09:54:31.260396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.206 [2024-12-06 09:54:31.279073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.206 "name": "raid_bdev1", 00:17:06.206 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:06.206 "strip_size_kb": 0, 00:17:06.206 "state": "online", 00:17:06.206 "raid_level": "raid1", 00:17:06.206 "superblock": true, 00:17:06.206 "num_base_bdevs": 2, 00:17:06.206 "num_base_bdevs_discovered": 1, 00:17:06.206 "num_base_bdevs_operational": 1, 00:17:06.206 "base_bdevs_list": [ 00:17:06.206 { 00:17:06.206 "name": null, 00:17:06.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.206 "is_configured": false, 00:17:06.206 "data_offset": 0, 00:17:06.206 "data_size": 7936 00:17:06.206 }, 00:17:06.206 { 00:17:06.206 "name": "BaseBdev2", 00:17:06.206 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:06.206 "is_configured": true, 00:17:06.206 "data_offset": 256, 00:17:06.206 "data_size": 7936 00:17:06.206 } 00:17:06.206 ] 00:17:06.206 }' 00:17:06.206 09:54:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.206 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.790 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:06.790 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.790 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.790 [2024-12-06 09:54:31.770248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.790 [2024-12-06 09:54:31.788661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:06.790 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.790 09:54:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:06.791 [2024-12-06 09:54:31.790762] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:07.728 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.728 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.728 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.729 "name": "raid_bdev1", 00:17:07.729 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:07.729 "strip_size_kb": 0, 00:17:07.729 "state": "online", 00:17:07.729 "raid_level": "raid1", 00:17:07.729 "superblock": true, 00:17:07.729 "num_base_bdevs": 2, 00:17:07.729 "num_base_bdevs_discovered": 2, 00:17:07.729 "num_base_bdevs_operational": 2, 00:17:07.729 "process": { 00:17:07.729 "type": "rebuild", 00:17:07.729 "target": "spare", 00:17:07.729 "progress": { 00:17:07.729 "blocks": 2560, 00:17:07.729 "percent": 32 00:17:07.729 } 00:17:07.729 }, 00:17:07.729 "base_bdevs_list": [ 00:17:07.729 { 00:17:07.729 "name": "spare", 00:17:07.729 "uuid": "69c6ce43-4c7d-5df2-bcb4-fb2207128cc0", 00:17:07.729 "is_configured": true, 00:17:07.729 "data_offset": 256, 00:17:07.729 "data_size": 7936 00:17:07.729 }, 00:17:07.729 { 00:17:07.729 "name": "BaseBdev2", 00:17:07.729 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:07.729 "is_configured": true, 00:17:07.729 "data_offset": 256, 00:17:07.729 "data_size": 7936 00:17:07.729 } 00:17:07.729 ] 00:17:07.729 }' 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.729 09:54:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.729 [2024-12-06 09:54:32.949872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.729 [2024-12-06 09:54:32.999482] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:07.729 [2024-12-06 09:54:32.999550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.729 [2024-12-06 09:54:32.999564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.729 [2024-12-06 09:54:32.999575] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.988 "name": "raid_bdev1", 00:17:07.988 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:07.988 "strip_size_kb": 0, 00:17:07.988 "state": "online", 00:17:07.988 "raid_level": "raid1", 00:17:07.988 "superblock": true, 00:17:07.988 "num_base_bdevs": 2, 00:17:07.988 "num_base_bdevs_discovered": 1, 00:17:07.988 "num_base_bdevs_operational": 1, 00:17:07.988 "base_bdevs_list": [ 00:17:07.988 { 00:17:07.988 "name": null, 00:17:07.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.988 "is_configured": false, 00:17:07.988 "data_offset": 0, 00:17:07.988 "data_size": 7936 00:17:07.988 }, 00:17:07.988 { 00:17:07.988 "name": "BaseBdev2", 00:17:07.988 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:07.988 "is_configured": true, 00:17:07.988 "data_offset": 256, 00:17:07.988 "data_size": 7936 00:17:07.988 } 00:17:07.988 ] 00:17:07.988 }' 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.988 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.246 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.246 
09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.246 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.246 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.246 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.246 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.247 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.247 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.247 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.247 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.247 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.247 "name": "raid_bdev1", 00:17:08.247 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:08.247 "strip_size_kb": 0, 00:17:08.247 "state": "online", 00:17:08.247 "raid_level": "raid1", 00:17:08.247 "superblock": true, 00:17:08.247 "num_base_bdevs": 2, 00:17:08.247 "num_base_bdevs_discovered": 1, 00:17:08.247 "num_base_bdevs_operational": 1, 00:17:08.247 "base_bdevs_list": [ 00:17:08.247 { 00:17:08.247 "name": null, 00:17:08.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.247 "is_configured": false, 00:17:08.247 "data_offset": 0, 00:17:08.247 "data_size": 7936 00:17:08.247 }, 00:17:08.247 { 00:17:08.247 "name": "BaseBdev2", 00:17:08.247 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:08.247 "is_configured": true, 00:17:08.247 "data_offset": 256, 00:17:08.247 "data_size": 7936 00:17:08.247 } 00:17:08.247 ] 00:17:08.247 }' 00:17:08.247 09:54:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.505 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.505 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.505 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.505 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:08.505 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.505 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.505 [2024-12-06 09:54:33.623571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.505 [2024-12-06 09:54:33.640490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:08.505 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.505 09:54:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:08.505 [2024-12-06 09:54:33.642650] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:09.438 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.438 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.438 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.438 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.438 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.438 09:54:34 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.438 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.438 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.438 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.438 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.438 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.438 "name": "raid_bdev1", 00:17:09.438 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:09.438 "strip_size_kb": 0, 00:17:09.438 "state": "online", 00:17:09.438 "raid_level": "raid1", 00:17:09.438 "superblock": true, 00:17:09.438 "num_base_bdevs": 2, 00:17:09.438 "num_base_bdevs_discovered": 2, 00:17:09.438 "num_base_bdevs_operational": 2, 00:17:09.438 "process": { 00:17:09.438 "type": "rebuild", 00:17:09.438 "target": "spare", 00:17:09.438 "progress": { 00:17:09.438 "blocks": 2560, 00:17:09.438 "percent": 32 00:17:09.438 } 00:17:09.438 }, 00:17:09.438 "base_bdevs_list": [ 00:17:09.438 { 00:17:09.438 "name": "spare", 00:17:09.438 "uuid": "69c6ce43-4c7d-5df2-bcb4-fb2207128cc0", 00:17:09.438 "is_configured": true, 00:17:09.438 "data_offset": 256, 00:17:09.438 "data_size": 7936 00:17:09.438 }, 00:17:09.438 { 00:17:09.438 "name": "BaseBdev2", 00:17:09.438 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:09.438 "is_configured": true, 00:17:09.438 "data_offset": 256, 00:17:09.438 "data_size": 7936 00:17:09.438 } 00:17:09.438 ] 00:17:09.438 }' 00:17:09.438 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:09.697 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=668 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.697 09:54:34 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.697 "name": "raid_bdev1", 00:17:09.697 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:09.697 "strip_size_kb": 0, 00:17:09.697 "state": "online", 00:17:09.697 "raid_level": "raid1", 00:17:09.697 "superblock": true, 00:17:09.697 "num_base_bdevs": 2, 00:17:09.697 "num_base_bdevs_discovered": 2, 00:17:09.697 "num_base_bdevs_operational": 2, 00:17:09.697 "process": { 00:17:09.697 "type": "rebuild", 00:17:09.697 "target": "spare", 00:17:09.697 "progress": { 00:17:09.697 "blocks": 2816, 00:17:09.697 "percent": 35 00:17:09.697 } 00:17:09.697 }, 00:17:09.697 "base_bdevs_list": [ 00:17:09.697 { 00:17:09.697 "name": "spare", 00:17:09.697 "uuid": "69c6ce43-4c7d-5df2-bcb4-fb2207128cc0", 00:17:09.697 "is_configured": true, 00:17:09.697 "data_offset": 256, 00:17:09.697 "data_size": 7936 00:17:09.697 }, 00:17:09.697 { 00:17:09.697 "name": "BaseBdev2", 00:17:09.697 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:09.697 "is_configured": true, 00:17:09.697 "data_offset": 256, 00:17:09.697 "data_size": 7936 00:17:09.697 } 00:17:09.697 ] 00:17:09.697 }' 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.697 09:54:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.088 09:54:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.088 09:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.088 09:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.088 09:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.088 09:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.088 09:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.088 09:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.088 09:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.088 09:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.088 09:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.088 09:54:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.088 09:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.088 "name": "raid_bdev1", 00:17:11.088 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:11.088 "strip_size_kb": 0, 00:17:11.088 "state": "online", 00:17:11.088 "raid_level": "raid1", 00:17:11.088 "superblock": true, 00:17:11.088 "num_base_bdevs": 2, 00:17:11.088 "num_base_bdevs_discovered": 2, 00:17:11.088 "num_base_bdevs_operational": 2, 00:17:11.088 "process": { 00:17:11.088 "type": "rebuild", 00:17:11.088 "target": "spare", 00:17:11.088 "progress": { 00:17:11.088 "blocks": 5632, 00:17:11.088 "percent": 70 00:17:11.088 } 00:17:11.088 }, 00:17:11.088 "base_bdevs_list": [ 00:17:11.088 { 00:17:11.088 "name": "spare", 00:17:11.088 "uuid": "69c6ce43-4c7d-5df2-bcb4-fb2207128cc0", 
00:17:11.088 "is_configured": true, 00:17:11.088 "data_offset": 256, 00:17:11.088 "data_size": 7936 00:17:11.088 }, 00:17:11.088 { 00:17:11.088 "name": "BaseBdev2", 00:17:11.088 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:11.088 "is_configured": true, 00:17:11.088 "data_offset": 256, 00:17:11.088 "data_size": 7936 00:17:11.088 } 00:17:11.088 ] 00:17:11.088 }' 00:17:11.088 09:54:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.088 09:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.088 09:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.088 09:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.088 09:54:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.662 [2024-12-06 09:54:36.764656] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:11.662 [2024-12-06 09:54:36.764813] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:11.662 [2024-12-06 09:54:36.764924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.921 "name": "raid_bdev1", 00:17:11.921 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:11.921 "strip_size_kb": 0, 00:17:11.921 "state": "online", 00:17:11.921 "raid_level": "raid1", 00:17:11.921 "superblock": true, 00:17:11.921 "num_base_bdevs": 2, 00:17:11.921 "num_base_bdevs_discovered": 2, 00:17:11.921 "num_base_bdevs_operational": 2, 00:17:11.921 "base_bdevs_list": [ 00:17:11.921 { 00:17:11.921 "name": "spare", 00:17:11.921 "uuid": "69c6ce43-4c7d-5df2-bcb4-fb2207128cc0", 00:17:11.921 "is_configured": true, 00:17:11.921 "data_offset": 256, 00:17:11.921 "data_size": 7936 00:17:11.921 }, 00:17:11.921 { 00:17:11.921 "name": "BaseBdev2", 00:17:11.921 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:11.921 "is_configured": true, 00:17:11.921 "data_offset": 256, 00:17:11.921 "data_size": 7936 00:17:11.921 } 00:17:11.921 ] 00:17:11.921 }' 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:11.921 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == 
\s\p\a\r\e ]] 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.180 "name": "raid_bdev1", 00:17:12.180 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:12.180 "strip_size_kb": 0, 00:17:12.180 "state": "online", 00:17:12.180 "raid_level": "raid1", 00:17:12.180 "superblock": true, 00:17:12.180 "num_base_bdevs": 2, 00:17:12.180 "num_base_bdevs_discovered": 2, 00:17:12.180 "num_base_bdevs_operational": 2, 00:17:12.180 "base_bdevs_list": [ 00:17:12.180 { 00:17:12.180 "name": "spare", 00:17:12.180 "uuid": "69c6ce43-4c7d-5df2-bcb4-fb2207128cc0", 00:17:12.180 "is_configured": true, 00:17:12.180 "data_offset": 256, 00:17:12.180 "data_size": 7936 00:17:12.180 }, 00:17:12.180 { 00:17:12.180 "name": 
"BaseBdev2", 00:17:12.180 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:12.180 "is_configured": true, 00:17:12.180 "data_offset": 256, 00:17:12.180 "data_size": 7936 00:17:12.180 } 00:17:12.180 ] 00:17:12.180 }' 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.180 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.180 "name": "raid_bdev1", 00:17:12.180 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:12.180 "strip_size_kb": 0, 00:17:12.180 "state": "online", 00:17:12.180 "raid_level": "raid1", 00:17:12.180 "superblock": true, 00:17:12.181 "num_base_bdevs": 2, 00:17:12.181 "num_base_bdevs_discovered": 2, 00:17:12.181 "num_base_bdevs_operational": 2, 00:17:12.181 "base_bdevs_list": [ 00:17:12.181 { 00:17:12.181 "name": "spare", 00:17:12.181 "uuid": "69c6ce43-4c7d-5df2-bcb4-fb2207128cc0", 00:17:12.181 "is_configured": true, 00:17:12.181 "data_offset": 256, 00:17:12.181 "data_size": 7936 00:17:12.181 }, 00:17:12.181 { 00:17:12.181 "name": "BaseBdev2", 00:17:12.181 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:12.181 "is_configured": true, 00:17:12.181 "data_offset": 256, 00:17:12.181 "data_size": 7936 00:17:12.181 } 00:17:12.181 ] 00:17:12.181 }' 00:17:12.181 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.181 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.749 [2024-12-06 09:54:37.815551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:17:12.749 [2024-12-06 09:54:37.815633] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.749 [2024-12-06 09:54:37.815732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.749 [2024-12-06 09:54:37.815823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.749 [2024-12-06 09:54:37.815881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 
-- # local bdev_list 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:12.749 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.750 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:12.750 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.750 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:12.750 09:54:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:13.010 /dev/nbd0 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:17:13.010 1+0 records in 00:17:13.010 1+0 records out 00:17:13.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647269 s, 6.3 MB/s 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:13.010 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:13.270 /dev/nbd1 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@877 -- # break 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:13.270 1+0 records in 00:17:13.270 1+0 records out 00:17:13.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299046 s, 13.7 MB/s 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.270 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:13.529 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.529 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.529 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.529 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.529 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.529 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.529 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:13.530 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.530 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.530 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.789 [2024-12-06 09:54:38.982506] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:13.789 [2024-12-06 09:54:38.982566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.789 [2024-12-06 09:54:38.982591] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:13.789 [2024-12-06 09:54:38.982600] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.789 [2024-12-06 09:54:38.984913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.789 [2024-12-06 09:54:38.984951] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:13.789 [2024-12-06 09:54:38.985032] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:13.789 [2024-12-06 09:54:38.985082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.789 [2024-12-06 09:54:38.985243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:13.789 spare 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.789 09:54:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.049 [2024-12-06 09:54:39.085158] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:14.049 [2024-12-06 09:54:39.085188] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:14.049 [2024-12-06 09:54:39.085470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:14.049 [2024-12-06 09:54:39.085652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:14.049 [2024-12-06 09:54:39.085666] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:14.049 [2024-12-06 09:54:39.085819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.049 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.049 "name": "raid_bdev1", 00:17:14.049 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:14.049 "strip_size_kb": 0, 00:17:14.049 "state": "online", 00:17:14.049 "raid_level": "raid1", 00:17:14.049 "superblock": true, 00:17:14.049 "num_base_bdevs": 2, 00:17:14.049 "num_base_bdevs_discovered": 2, 00:17:14.049 "num_base_bdevs_operational": 2, 00:17:14.049 "base_bdevs_list": [ 00:17:14.049 { 00:17:14.049 "name": "spare", 00:17:14.049 "uuid": "69c6ce43-4c7d-5df2-bcb4-fb2207128cc0", 00:17:14.049 
"is_configured": true, 00:17:14.049 "data_offset": 256, 00:17:14.049 "data_size": 7936 00:17:14.049 }, 00:17:14.049 { 00:17:14.049 "name": "BaseBdev2", 00:17:14.049 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:14.050 "is_configured": true, 00:17:14.050 "data_offset": 256, 00:17:14.050 "data_size": 7936 00:17:14.050 } 00:17:14.050 ] 00:17:14.050 }' 00:17:14.050 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.050 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.309 "name": "raid_bdev1", 00:17:14.309 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:14.309 "strip_size_kb": 0, 00:17:14.309 "state": "online", 00:17:14.309 "raid_level": "raid1", 
00:17:14.309 "superblock": true, 00:17:14.309 "num_base_bdevs": 2, 00:17:14.309 "num_base_bdevs_discovered": 2, 00:17:14.309 "num_base_bdevs_operational": 2, 00:17:14.309 "base_bdevs_list": [ 00:17:14.309 { 00:17:14.309 "name": "spare", 00:17:14.309 "uuid": "69c6ce43-4c7d-5df2-bcb4-fb2207128cc0", 00:17:14.309 "is_configured": true, 00:17:14.309 "data_offset": 256, 00:17:14.309 "data_size": 7936 00:17:14.309 }, 00:17:14.309 { 00:17:14.309 "name": "BaseBdev2", 00:17:14.309 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:14.309 "is_configured": true, 00:17:14.309 "data_offset": 256, 00:17:14.309 "data_size": 7936 00:17:14.309 } 00:17:14.309 ] 00:17:14.309 }' 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.309 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.569 [2024-12-06 09:54:39.669336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.569 09:54:39 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.569 "name": "raid_bdev1", 00:17:14.569 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:14.569 "strip_size_kb": 0, 00:17:14.569 "state": "online", 00:17:14.569 "raid_level": "raid1", 00:17:14.569 "superblock": true, 00:17:14.569 "num_base_bdevs": 2, 00:17:14.569 "num_base_bdevs_discovered": 1, 00:17:14.569 "num_base_bdevs_operational": 1, 00:17:14.569 "base_bdevs_list": [ 00:17:14.569 { 00:17:14.569 "name": null, 00:17:14.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.569 "is_configured": false, 00:17:14.569 "data_offset": 0, 00:17:14.569 "data_size": 7936 00:17:14.569 }, 00:17:14.569 { 00:17:14.569 "name": "BaseBdev2", 00:17:14.569 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:14.569 "is_configured": true, 00:17:14.569 "data_offset": 256, 00:17:14.569 "data_size": 7936 00:17:14.569 } 00:17:14.569 ] 00:17:14.569 }' 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.569 09:54:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.139 09:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:15.139 09:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.139 09:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.139 [2024-12-06 09:54:40.132564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.139 [2024-12-06 09:54:40.132765] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:15.139 [2024-12-06 09:54:40.132790] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:17:15.139 [2024-12-06 09:54:40.132822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.139 [2024-12-06 09:54:40.149715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:15.139 09:54:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.139 09:54:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:15.139 [2024-12-06 09:54:40.151774] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.079 "name": "raid_bdev1", 00:17:16.079 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:16.079 
"strip_size_kb": 0, 00:17:16.079 "state": "online", 00:17:16.079 "raid_level": "raid1", 00:17:16.079 "superblock": true, 00:17:16.079 "num_base_bdevs": 2, 00:17:16.079 "num_base_bdevs_discovered": 2, 00:17:16.079 "num_base_bdevs_operational": 2, 00:17:16.079 "process": { 00:17:16.079 "type": "rebuild", 00:17:16.079 "target": "spare", 00:17:16.079 "progress": { 00:17:16.079 "blocks": 2560, 00:17:16.079 "percent": 32 00:17:16.079 } 00:17:16.079 }, 00:17:16.079 "base_bdevs_list": [ 00:17:16.079 { 00:17:16.079 "name": "spare", 00:17:16.079 "uuid": "69c6ce43-4c7d-5df2-bcb4-fb2207128cc0", 00:17:16.079 "is_configured": true, 00:17:16.079 "data_offset": 256, 00:17:16.079 "data_size": 7936 00:17:16.079 }, 00:17:16.079 { 00:17:16.079 "name": "BaseBdev2", 00:17:16.079 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:16.079 "is_configured": true, 00:17:16.079 "data_offset": 256, 00:17:16.079 "data_size": 7936 00:17:16.079 } 00:17:16.079 ] 00:17:16.079 }' 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.079 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.079 [2024-12-06 09:54:41.315645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.339 [2024-12-06 09:54:41.360328] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:17:16.339 [2024-12-06 09:54:41.360440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.339 [2024-12-06 09:54:41.360477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:16.339 [2024-12-06 09:54:41.360501] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.339 09:54:41 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.339 "name": "raid_bdev1", 00:17:16.339 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:16.339 "strip_size_kb": 0, 00:17:16.339 "state": "online", 00:17:16.339 "raid_level": "raid1", 00:17:16.339 "superblock": true, 00:17:16.339 "num_base_bdevs": 2, 00:17:16.339 "num_base_bdevs_discovered": 1, 00:17:16.339 "num_base_bdevs_operational": 1, 00:17:16.339 "base_bdevs_list": [ 00:17:16.339 { 00:17:16.339 "name": null, 00:17:16.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.339 "is_configured": false, 00:17:16.339 "data_offset": 0, 00:17:16.339 "data_size": 7936 00:17:16.339 }, 00:17:16.339 { 00:17:16.339 "name": "BaseBdev2", 00:17:16.339 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:16.339 "is_configured": true, 00:17:16.339 "data_offset": 256, 00:17:16.339 "data_size": 7936 00:17:16.339 } 00:17:16.339 ] 00:17:16.339 }' 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.339 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.600 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:16.600 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.600 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.600 [2024-12-06 09:54:41.863699] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:16.600 [2024-12-06 09:54:41.863803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.600 [2024-12-06 
09:54:41.863841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:16.600 [2024-12-06 09:54:41.863872] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.600 [2024-12-06 09:54:41.864400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.600 [2024-12-06 09:54:41.864463] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:16.600 [2024-12-06 09:54:41.864555] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:16.600 [2024-12-06 09:54:41.864570] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:16.600 [2024-12-06 09:54:41.864579] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:16.600 [2024-12-06 09:54:41.864606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.860 spare 00:17:16.860 [2024-12-06 09:54:41.881054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:16.860 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.860 09:54:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:16.860 [2024-12-06 09:54:41.883128] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.799 09:54:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.800 09:54:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.800 09:54:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.800 09:54:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:17.800 09:54:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.800 09:54:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.800 09:54:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.800 09:54:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.800 09:54:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.800 09:54:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.800 09:54:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.800 "name": "raid_bdev1", 00:17:17.800 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:17.800 "strip_size_kb": 0, 00:17:17.800 "state": "online", 00:17:17.800 "raid_level": "raid1", 00:17:17.800 "superblock": true, 00:17:17.800 "num_base_bdevs": 2, 00:17:17.800 "num_base_bdevs_discovered": 2, 00:17:17.800 "num_base_bdevs_operational": 2, 00:17:17.800 "process": { 00:17:17.800 "type": "rebuild", 00:17:17.800 "target": "spare", 00:17:17.800 "progress": { 00:17:17.800 "blocks": 2560, 00:17:17.800 "percent": 32 00:17:17.800 } 00:17:17.800 }, 00:17:17.800 "base_bdevs_list": [ 00:17:17.800 { 00:17:17.800 "name": "spare", 00:17:17.800 "uuid": "69c6ce43-4c7d-5df2-bcb4-fb2207128cc0", 00:17:17.800 "is_configured": true, 00:17:17.800 "data_offset": 256, 00:17:17.800 "data_size": 7936 00:17:17.800 }, 00:17:17.800 { 00:17:17.800 "name": "BaseBdev2", 00:17:17.800 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:17.800 "is_configured": true, 00:17:17.800 "data_offset": 256, 00:17:17.800 "data_size": 7936 00:17:17.800 } 00:17:17.800 ] 00:17:17.800 }' 00:17:17.800 09:54:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.800 09:54:42 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.800 09:54:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.800 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.800 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.800 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.800 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.800 [2024-12-06 09:54:43.047673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.060 [2024-12-06 09:54:43.091521] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:18.060 [2024-12-06 09:54:43.091575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.060 [2024-12-06 09:54:43.091593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:18.060 [2024-12-06 09:54:43.091600] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.060 09:54:43 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.060 "name": "raid_bdev1", 00:17:18.060 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:18.060 "strip_size_kb": 0, 00:17:18.060 "state": "online", 00:17:18.060 "raid_level": "raid1", 00:17:18.060 "superblock": true, 00:17:18.060 "num_base_bdevs": 2, 00:17:18.060 "num_base_bdevs_discovered": 1, 00:17:18.060 "num_base_bdevs_operational": 1, 00:17:18.060 "base_bdevs_list": [ 00:17:18.060 { 00:17:18.060 "name": null, 00:17:18.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.060 "is_configured": false, 00:17:18.060 "data_offset": 0, 00:17:18.060 "data_size": 7936 00:17:18.060 }, 00:17:18.060 { 00:17:18.060 "name": "BaseBdev2", 00:17:18.060 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:18.060 "is_configured": true, 00:17:18.060 "data_offset": 256, 00:17:18.060 
"data_size": 7936 00:17:18.060 } 00:17:18.060 ] 00:17:18.060 }' 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.060 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.320 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:18.320 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.320 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:18.320 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:18.320 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.320 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.320 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.320 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.320 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.580 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.580 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.580 "name": "raid_bdev1", 00:17:18.580 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:18.580 "strip_size_kb": 0, 00:17:18.580 "state": "online", 00:17:18.580 "raid_level": "raid1", 00:17:18.580 "superblock": true, 00:17:18.580 "num_base_bdevs": 2, 00:17:18.580 "num_base_bdevs_discovered": 1, 00:17:18.580 "num_base_bdevs_operational": 1, 00:17:18.580 "base_bdevs_list": [ 00:17:18.580 { 00:17:18.580 "name": null, 00:17:18.580 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:18.580 "is_configured": false, 00:17:18.580 "data_offset": 0, 00:17:18.580 "data_size": 7936 00:17:18.580 }, 00:17:18.580 { 00:17:18.580 "name": "BaseBdev2", 00:17:18.580 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:18.580 "is_configured": true, 00:17:18.580 "data_offset": 256, 00:17:18.580 "data_size": 7936 00:17:18.580 } 00:17:18.580 ] 00:17:18.580 }' 00:17:18.580 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.580 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:18.580 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.580 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:18.580 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:18.580 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.580 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.580 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.580 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:18.580 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.580 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.580 [2024-12-06 09:54:43.730008] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:18.580 [2024-12-06 09:54:43.730067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.580 [2024-12-06 09:54:43.730098] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:17:18.580 [2024-12-06 09:54:43.730119] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.580 [2024-12-06 09:54:43.730630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.580 [2024-12-06 09:54:43.730648] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:18.580 [2024-12-06 09:54:43.730727] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:18.581 [2024-12-06 09:54:43.730740] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:18.581 [2024-12-06 09:54:43.730754] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:18.581 [2024-12-06 09:54:43.730765] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:18.581 BaseBdev1 00:17:18.581 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.581 09:54:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.555 "name": "raid_bdev1", 00:17:19.555 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:19.555 "strip_size_kb": 0, 00:17:19.555 "state": "online", 00:17:19.555 "raid_level": "raid1", 00:17:19.555 "superblock": true, 00:17:19.555 "num_base_bdevs": 2, 00:17:19.555 "num_base_bdevs_discovered": 1, 00:17:19.555 "num_base_bdevs_operational": 1, 00:17:19.555 "base_bdevs_list": [ 00:17:19.555 { 00:17:19.555 "name": null, 00:17:19.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.555 "is_configured": false, 00:17:19.555 "data_offset": 0, 00:17:19.555 "data_size": 7936 00:17:19.555 }, 00:17:19.555 { 00:17:19.555 "name": "BaseBdev2", 00:17:19.555 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:19.555 "is_configured": true, 00:17:19.555 "data_offset": 256, 00:17:19.555 "data_size": 7936 00:17:19.555 } 00:17:19.555 ] 00:17:19.555 }' 00:17:19.555 09:54:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.555 09:54:44 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.124 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.124 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.124 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.125 "name": "raid_bdev1", 00:17:20.125 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:20.125 "strip_size_kb": 0, 00:17:20.125 "state": "online", 00:17:20.125 "raid_level": "raid1", 00:17:20.125 "superblock": true, 00:17:20.125 "num_base_bdevs": 2, 00:17:20.125 "num_base_bdevs_discovered": 1, 00:17:20.125 "num_base_bdevs_operational": 1, 00:17:20.125 "base_bdevs_list": [ 00:17:20.125 { 00:17:20.125 "name": null, 00:17:20.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.125 "is_configured": false, 00:17:20.125 "data_offset": 0, 00:17:20.125 "data_size": 7936 00:17:20.125 }, 00:17:20.125 { 00:17:20.125 "name": "BaseBdev2", 00:17:20.125 "uuid": 
"7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:20.125 "is_configured": true, 00:17:20.125 "data_offset": 256, 00:17:20.125 "data_size": 7936 00:17:20.125 } 00:17:20.125 ] 00:17:20.125 }' 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.125 [2024-12-06 09:54:45.359506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:17:20.125 [2024-12-06 09:54:45.359757] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:20.125 [2024-12-06 09:54:45.359817] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:20.125 request: 00:17:20.125 { 00:17:20.125 "base_bdev": "BaseBdev1", 00:17:20.125 "raid_bdev": "raid_bdev1", 00:17:20.125 "method": "bdev_raid_add_base_bdev", 00:17:20.125 "req_id": 1 00:17:20.125 } 00:17:20.125 Got JSON-RPC error response 00:17:20.125 response: 00:17:20.125 { 00:17:20.125 "code": -22, 00:17:20.125 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:20.125 } 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.125 09:54:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.505 "name": "raid_bdev1", 00:17:21.505 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:21.505 "strip_size_kb": 0, 00:17:21.505 "state": "online", 00:17:21.505 "raid_level": "raid1", 00:17:21.505 "superblock": true, 00:17:21.505 "num_base_bdevs": 2, 00:17:21.505 "num_base_bdevs_discovered": 1, 00:17:21.505 "num_base_bdevs_operational": 1, 00:17:21.505 "base_bdevs_list": [ 00:17:21.505 { 00:17:21.505 "name": null, 00:17:21.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.505 "is_configured": false, 00:17:21.505 "data_offset": 0, 00:17:21.505 "data_size": 7936 00:17:21.505 }, 00:17:21.505 { 00:17:21.505 "name": "BaseBdev2", 00:17:21.505 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:21.505 "is_configured": true, 00:17:21.505 "data_offset": 256, 00:17:21.505 "data_size": 7936 00:17:21.505 } 
00:17:21.505 ] 00:17:21.505 }' 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.505 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.766 "name": "raid_bdev1", 00:17:21.766 "uuid": "ac974092-e528-4723-98b6-f2c4b2d264e4", 00:17:21.766 "strip_size_kb": 0, 00:17:21.766 "state": "online", 00:17:21.766 "raid_level": "raid1", 00:17:21.766 "superblock": true, 00:17:21.766 "num_base_bdevs": 2, 00:17:21.766 "num_base_bdevs_discovered": 1, 00:17:21.766 "num_base_bdevs_operational": 1, 00:17:21.766 "base_bdevs_list": [ 00:17:21.766 { 00:17:21.766 "name": null, 00:17:21.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.766 "is_configured": false, 
00:17:21.766 "data_offset": 0, 00:17:21.766 "data_size": 7936 00:17:21.766 }, 00:17:21.766 { 00:17:21.766 "name": "BaseBdev2", 00:17:21.766 "uuid": "7ae7704c-fe5a-54ef-b4a2-dd1bae070ef2", 00:17:21.766 "is_configured": true, 00:17:21.766 "data_offset": 256, 00:17:21.766 "data_size": 7936 00:17:21.766 } 00:17:21.766 ] 00:17:21.766 }' 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86379 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86379 ']' 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86379 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86379 00:17:21.766 killing process with pid 86379 00:17:21.766 Received shutdown signal, test time was about 60.000000 seconds 00:17:21.766 00:17:21.766 Latency(us) 00:17:21.766 [2024-12-06T09:54:47.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.766 [2024-12-06T09:54:47.039Z] =================================================================================================================== 00:17:21.766 [2024-12-06T09:54:47.039Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.766 
09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86379' 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86379 00:17:21.766 [2024-12-06 09:54:46.995505] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.766 [2024-12-06 09:54:46.995621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.766 09:54:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86379 00:17:21.766 [2024-12-06 09:54:46.995670] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.766 [2024-12-06 09:54:46.995682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:22.335 [2024-12-06 09:54:47.309962] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.276 09:54:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:23.276 00:17:23.276 real 0m19.987s 00:17:23.276 user 0m25.941s 00:17:23.276 sys 0m2.781s 00:17:23.276 ************************************ 00:17:23.276 END TEST raid_rebuild_test_sb_4k 00:17:23.276 ************************************ 00:17:23.276 09:54:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.276 09:54:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.276 09:54:48 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:23.276 09:54:48 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:23.276 
09:54:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:23.276 09:54:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.276 09:54:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.536 ************************************ 00:17:23.536 START TEST raid_state_function_test_sb_md_separate 00:17:23.536 ************************************ 00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:17:23.536 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:23.537 Process raid pid: 87072 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87072 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87072' 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87072 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87072 ']' 00:17:23.537 09:54:48 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.537 09:54:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.537 [2024-12-06 09:54:48.653222] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:17:23.537 [2024-12-06 09:54:48.653444] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.796 [2024-12-06 09:54:48.830162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.796 [2024-12-06 09:54:48.961943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.056 [2024-12-06 09:54:49.201667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.056 [2024-12-06 09:54:49.201708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.317 [2024-12-06 09:54:49.478987] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.317 [2024-12-06 09:54:49.479050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.317 [2024-12-06 09:54:49.479060] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.317 [2024-12-06 09:54:49.479069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.317 "name": "Existed_Raid", 00:17:24.317 "uuid": "0448d071-d2cb-4073-8744-eb07779a034e", 00:17:24.317 "strip_size_kb": 0, 00:17:24.317 "state": "configuring", 00:17:24.317 "raid_level": "raid1", 00:17:24.317 "superblock": true, 00:17:24.317 "num_base_bdevs": 2, 00:17:24.317 "num_base_bdevs_discovered": 0, 00:17:24.317 "num_base_bdevs_operational": 2, 00:17:24.317 "base_bdevs_list": [ 00:17:24.317 { 00:17:24.317 "name": "BaseBdev1", 00:17:24.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.317 "is_configured": false, 00:17:24.317 "data_offset": 0, 00:17:24.317 "data_size": 0 00:17:24.317 }, 00:17:24.317 { 00:17:24.317 "name": "BaseBdev2", 00:17:24.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.317 "is_configured": false, 00:17:24.317 "data_offset": 0, 00:17:24.317 "data_size": 0 00:17:24.317 } 00:17:24.317 ] 00:17:24.317 }' 00:17:24.317 09:54:49 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.317 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.888 [2024-12-06 09:54:49.878219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.888 [2024-12-06 09:54:49.878302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.888 [2024-12-06 09:54:49.890211] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.888 [2024-12-06 09:54:49.890280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.888 [2024-12-06 09:54:49.890305] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.888 [2024-12-06 09:54:49.890332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.888 09:54:49 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.888 [2024-12-06 09:54:49.944590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.888 BaseBdev1 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.888 [ 00:17:24.888 { 00:17:24.888 "name": "BaseBdev1", 00:17:24.888 "aliases": [ 00:17:24.888 "e5f9f525-54bc-4e1b-a5da-4b7526a03579" 00:17:24.888 ], 00:17:24.888 "product_name": "Malloc disk", 00:17:24.888 "block_size": 4096, 00:17:24.888 "num_blocks": 8192, 00:17:24.888 "uuid": "e5f9f525-54bc-4e1b-a5da-4b7526a03579", 00:17:24.888 "md_size": 32, 00:17:24.888 "md_interleave": false, 00:17:24.888 "dif_type": 0, 00:17:24.888 "assigned_rate_limits": { 00:17:24.888 "rw_ios_per_sec": 0, 00:17:24.888 "rw_mbytes_per_sec": 0, 00:17:24.888 "r_mbytes_per_sec": 0, 00:17:24.888 "w_mbytes_per_sec": 0 00:17:24.888 }, 00:17:24.888 "claimed": true, 00:17:24.888 "claim_type": "exclusive_write", 00:17:24.888 "zoned": false, 00:17:24.888 "supported_io_types": { 00:17:24.888 "read": true, 00:17:24.888 "write": true, 00:17:24.888 "unmap": true, 00:17:24.888 "flush": true, 00:17:24.888 "reset": true, 00:17:24.888 "nvme_admin": false, 00:17:24.888 "nvme_io": false, 00:17:24.888 "nvme_io_md": false, 00:17:24.888 "write_zeroes": true, 00:17:24.888 "zcopy": true, 00:17:24.888 "get_zone_info": false, 00:17:24.888 "zone_management": false, 00:17:24.888 "zone_append": false, 00:17:24.888 "compare": false, 00:17:24.888 "compare_and_write": false, 00:17:24.888 "abort": true, 00:17:24.888 "seek_hole": false, 00:17:24.888 "seek_data": false, 00:17:24.888 "copy": true, 00:17:24.888 "nvme_iov_md": false 00:17:24.888 }, 00:17:24.888 "memory_domains": [ 00:17:24.888 { 00:17:24.888 "dma_device_id": "system", 00:17:24.888 "dma_device_type": 1 00:17:24.888 }, 
00:17:24.888 { 00:17:24.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.888 "dma_device_type": 2 00:17:24.888 } 00:17:24.888 ], 00:17:24.888 "driver_specific": {} 00:17:24.888 } 00:17:24.888 ] 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.888 09:54:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.888 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.888 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.888 "name": "Existed_Raid", 00:17:24.888 "uuid": "34dd43da-699e-4c6e-b75b-feff5ff3fe45", 00:17:24.888 "strip_size_kb": 0, 00:17:24.888 "state": "configuring", 00:17:24.888 "raid_level": "raid1", 00:17:24.888 "superblock": true, 00:17:24.888 "num_base_bdevs": 2, 00:17:24.888 "num_base_bdevs_discovered": 1, 00:17:24.888 "num_base_bdevs_operational": 2, 00:17:24.888 "base_bdevs_list": [ 00:17:24.888 { 00:17:24.888 "name": "BaseBdev1", 00:17:24.888 "uuid": "e5f9f525-54bc-4e1b-a5da-4b7526a03579", 00:17:24.888 "is_configured": true, 00:17:24.888 "data_offset": 256, 00:17:24.888 "data_size": 7936 00:17:24.888 }, 00:17:24.888 { 00:17:24.888 "name": "BaseBdev2", 00:17:24.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.889 "is_configured": false, 00:17:24.889 "data_offset": 0, 00:17:24.889 "data_size": 0 00:17:24.889 } 00:17:24.889 ] 00:17:24.889 }' 00:17:24.889 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.889 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:25.460 [2024-12-06 09:54:50.463777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:25.460 [2024-12-06 09:54:50.463825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.460 [2024-12-06 09:54:50.471805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.460 [2024-12-06 09:54:50.473833] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:25.460 [2024-12-06 09:54:50.473934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.460 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.460 "name": "Existed_Raid", 00:17:25.460 "uuid": "696c69df-c28b-49a1-a6b4-f17795c6a0d7", 00:17:25.460 "strip_size_kb": 0, 00:17:25.460 "state": "configuring", 00:17:25.460 "raid_level": "raid1", 00:17:25.460 "superblock": true, 00:17:25.460 "num_base_bdevs": 2, 00:17:25.460 "num_base_bdevs_discovered": 1, 00:17:25.460 
"num_base_bdevs_operational": 2, 00:17:25.460 "base_bdevs_list": [ 00:17:25.460 { 00:17:25.460 "name": "BaseBdev1", 00:17:25.460 "uuid": "e5f9f525-54bc-4e1b-a5da-4b7526a03579", 00:17:25.460 "is_configured": true, 00:17:25.460 "data_offset": 256, 00:17:25.460 "data_size": 7936 00:17:25.460 }, 00:17:25.460 { 00:17:25.460 "name": "BaseBdev2", 00:17:25.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.461 "is_configured": false, 00:17:25.461 "data_offset": 0, 00:17:25.461 "data_size": 0 00:17:25.461 } 00:17:25.461 ] 00:17:25.461 }' 00:17:25.461 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.461 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.721 [2024-12-06 09:54:50.971029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.721 [2024-12-06 09:54:50.971437] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:25.721 [2024-12-06 09:54:50.971493] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:25.721 [2024-12-06 09:54:50.971610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:25.721 [2024-12-06 09:54:50.971820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:25.721 [2024-12-06 09:54:50.971866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:25.721 [2024-12-06 
09:54:50.972012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.721 BaseBdev2 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.721 09:54:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.980 [ 00:17:25.980 { 00:17:25.980 "name": "BaseBdev2", 00:17:25.980 "aliases": [ 00:17:25.980 
"2bec4dfd-5767-4bca-9d14-34321b912caf" 00:17:25.980 ], 00:17:25.980 "product_name": "Malloc disk", 00:17:25.980 "block_size": 4096, 00:17:25.980 "num_blocks": 8192, 00:17:25.980 "uuid": "2bec4dfd-5767-4bca-9d14-34321b912caf", 00:17:25.980 "md_size": 32, 00:17:25.980 "md_interleave": false, 00:17:25.980 "dif_type": 0, 00:17:25.980 "assigned_rate_limits": { 00:17:25.980 "rw_ios_per_sec": 0, 00:17:25.980 "rw_mbytes_per_sec": 0, 00:17:25.980 "r_mbytes_per_sec": 0, 00:17:25.980 "w_mbytes_per_sec": 0 00:17:25.980 }, 00:17:25.980 "claimed": true, 00:17:25.980 "claim_type": "exclusive_write", 00:17:25.980 "zoned": false, 00:17:25.980 "supported_io_types": { 00:17:25.980 "read": true, 00:17:25.980 "write": true, 00:17:25.980 "unmap": true, 00:17:25.980 "flush": true, 00:17:25.980 "reset": true, 00:17:25.980 "nvme_admin": false, 00:17:25.980 "nvme_io": false, 00:17:25.980 "nvme_io_md": false, 00:17:25.980 "write_zeroes": true, 00:17:25.980 "zcopy": true, 00:17:25.980 "get_zone_info": false, 00:17:25.980 "zone_management": false, 00:17:25.980 "zone_append": false, 00:17:25.980 "compare": false, 00:17:25.980 "compare_and_write": false, 00:17:25.980 "abort": true, 00:17:25.980 "seek_hole": false, 00:17:25.980 "seek_data": false, 00:17:25.980 "copy": true, 00:17:25.980 "nvme_iov_md": false 00:17:25.980 }, 00:17:25.980 "memory_domains": [ 00:17:25.980 { 00:17:25.980 "dma_device_id": "system", 00:17:25.980 "dma_device_type": 1 00:17:25.980 }, 00:17:25.980 { 00:17:25.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.980 "dma_device_type": 2 00:17:25.980 } 00:17:25.980 ], 00:17:25.980 "driver_specific": {} 00:17:25.980 } 00:17:25.980 ] 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.980 09:54:51 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.980 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.980 "name": "Existed_Raid", 00:17:25.980 "uuid": "696c69df-c28b-49a1-a6b4-f17795c6a0d7", 00:17:25.980 "strip_size_kb": 0, 00:17:25.980 "state": "online", 00:17:25.980 "raid_level": "raid1", 00:17:25.980 "superblock": true, 00:17:25.980 "num_base_bdevs": 2, 00:17:25.980 "num_base_bdevs_discovered": 2, 00:17:25.980 "num_base_bdevs_operational": 2, 00:17:25.980 "base_bdevs_list": [ 00:17:25.980 { 00:17:25.980 "name": "BaseBdev1", 00:17:25.980 "uuid": "e5f9f525-54bc-4e1b-a5da-4b7526a03579", 00:17:25.981 "is_configured": true, 00:17:25.981 "data_offset": 256, 00:17:25.981 "data_size": 7936 00:17:25.981 }, 00:17:25.981 { 00:17:25.981 "name": "BaseBdev2", 00:17:25.981 "uuid": "2bec4dfd-5767-4bca-9d14-34321b912caf", 00:17:25.981 "is_configured": true, 00:17:25.981 "data_offset": 256, 00:17:25.981 "data_size": 7936 00:17:25.981 } 00:17:25.981 ] 00:17:25.981 }' 00:17:25.981 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.981 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.239 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:26.239 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:26.239 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:26.239 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:26.239 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:26.239 09:54:51 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:26.239 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:26.239 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:26.239 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.239 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.239 [2024-12-06 09:54:51.430554] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.239 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.239 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:26.239 "name": "Existed_Raid", 00:17:26.239 "aliases": [ 00:17:26.239 "696c69df-c28b-49a1-a6b4-f17795c6a0d7" 00:17:26.239 ], 00:17:26.239 "product_name": "Raid Volume", 00:17:26.239 "block_size": 4096, 00:17:26.239 "num_blocks": 7936, 00:17:26.239 "uuid": "696c69df-c28b-49a1-a6b4-f17795c6a0d7", 00:17:26.239 "md_size": 32, 00:17:26.239 "md_interleave": false, 00:17:26.239 "dif_type": 0, 00:17:26.239 "assigned_rate_limits": { 00:17:26.239 "rw_ios_per_sec": 0, 00:17:26.239 "rw_mbytes_per_sec": 0, 00:17:26.239 "r_mbytes_per_sec": 0, 00:17:26.239 "w_mbytes_per_sec": 0 00:17:26.239 }, 00:17:26.239 "claimed": false, 00:17:26.239 "zoned": false, 00:17:26.239 "supported_io_types": { 00:17:26.239 "read": true, 00:17:26.239 "write": true, 00:17:26.239 "unmap": false, 00:17:26.239 "flush": false, 00:17:26.239 "reset": true, 00:17:26.239 "nvme_admin": false, 00:17:26.239 "nvme_io": false, 00:17:26.239 "nvme_io_md": false, 00:17:26.239 "write_zeroes": true, 00:17:26.239 "zcopy": false, 00:17:26.239 "get_zone_info": 
false, 00:17:26.239 "zone_management": false, 00:17:26.239 "zone_append": false, 00:17:26.239 "compare": false, 00:17:26.239 "compare_and_write": false, 00:17:26.239 "abort": false, 00:17:26.239 "seek_hole": false, 00:17:26.239 "seek_data": false, 00:17:26.239 "copy": false, 00:17:26.239 "nvme_iov_md": false 00:17:26.239 }, 00:17:26.239 "memory_domains": [ 00:17:26.239 { 00:17:26.239 "dma_device_id": "system", 00:17:26.239 "dma_device_type": 1 00:17:26.239 }, 00:17:26.239 { 00:17:26.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.239 "dma_device_type": 2 00:17:26.239 }, 00:17:26.239 { 00:17:26.239 "dma_device_id": "system", 00:17:26.239 "dma_device_type": 1 00:17:26.239 }, 00:17:26.239 { 00:17:26.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.239 "dma_device_type": 2 00:17:26.239 } 00:17:26.239 ], 00:17:26.239 "driver_specific": { 00:17:26.239 "raid": { 00:17:26.239 "uuid": "696c69df-c28b-49a1-a6b4-f17795c6a0d7", 00:17:26.239 "strip_size_kb": 0, 00:17:26.239 "state": "online", 00:17:26.239 "raid_level": "raid1", 00:17:26.239 "superblock": true, 00:17:26.239 "num_base_bdevs": 2, 00:17:26.239 "num_base_bdevs_discovered": 2, 00:17:26.239 "num_base_bdevs_operational": 2, 00:17:26.239 "base_bdevs_list": [ 00:17:26.239 { 00:17:26.239 "name": "BaseBdev1", 00:17:26.239 "uuid": "e5f9f525-54bc-4e1b-a5da-4b7526a03579", 00:17:26.239 "is_configured": true, 00:17:26.239 "data_offset": 256, 00:17:26.239 "data_size": 7936 00:17:26.239 }, 00:17:26.239 { 00:17:26.239 "name": "BaseBdev2", 00:17:26.239 "uuid": "2bec4dfd-5767-4bca-9d14-34321b912caf", 00:17:26.239 "is_configured": true, 00:17:26.239 "data_offset": 256, 00:17:26.239 "data_size": 7936 00:17:26.239 } 00:17:26.239 ] 00:17:26.239 } 00:17:26.239 } 00:17:26.239 }' 00:17:26.239 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:26.497 09:54:51 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:26.497 BaseBdev2' 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.497 09:54:51 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.497 [2024-12-06 09:54:51.641951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.497 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.756 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.756 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.756 "name": "Existed_Raid", 
00:17:26.756 "uuid": "696c69df-c28b-49a1-a6b4-f17795c6a0d7", 00:17:26.756 "strip_size_kb": 0, 00:17:26.756 "state": "online", 00:17:26.756 "raid_level": "raid1", 00:17:26.756 "superblock": true, 00:17:26.756 "num_base_bdevs": 2, 00:17:26.756 "num_base_bdevs_discovered": 1, 00:17:26.756 "num_base_bdevs_operational": 1, 00:17:26.756 "base_bdevs_list": [ 00:17:26.756 { 00:17:26.756 "name": null, 00:17:26.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.756 "is_configured": false, 00:17:26.756 "data_offset": 0, 00:17:26.756 "data_size": 7936 00:17:26.756 }, 00:17:26.756 { 00:17:26.756 "name": "BaseBdev2", 00:17:26.756 "uuid": "2bec4dfd-5767-4bca-9d14-34321b912caf", 00:17:26.756 "is_configured": true, 00:17:26.756 "data_offset": 256, 00:17:26.756 "data_size": 7936 00:17:26.756 } 00:17:26.756 ] 00:17:26.756 }' 00:17:26.756 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.756 09:54:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.015 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:27.015 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:27.015 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:27.015 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.015 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.015 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.015 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.015 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:27.015 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.015 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:27.015 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.015 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.015 [2024-12-06 09:54:52.220602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:27.015 [2024-12-06 09:54:52.220726] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.274 [2024-12-06 09:54:52.329994] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.274 [2024-12-06 09:54:52.330051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.274 [2024-12-06 09:54:52.330064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87072 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87072 ']' 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87072 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87072 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.274 killing process with pid 87072 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87072' 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87072 00:17:27.274 [2024-12-06 09:54:52.430182] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:27.274 09:54:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87072 00:17:27.274 [2024-12-06 09:54:52.447184] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.655 09:54:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:28.655 00:17:28.655 real 0m5.061s 00:17:28.655 user 0m7.066s 00:17:28.655 sys 0m0.958s 00:17:28.655 09:54:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.655 09:54:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.655 ************************************ 00:17:28.655 END TEST raid_state_function_test_sb_md_separate 00:17:28.655 ************************************ 00:17:28.655 09:54:53 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:28.655 09:54:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:28.655 09:54:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.655 09:54:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:28.655 ************************************ 00:17:28.655 START TEST raid_superblock_test_md_separate 00:17:28.655 ************************************ 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87324 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87324 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87324 ']' 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.655 09:54:53 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.655 09:54:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.655 [2024-12-06 09:54:53.780083] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:17:28.655 [2024-12-06 09:54:53.780321] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87324 ] 00:17:28.914 [2024-12-06 09:54:53.958427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.914 [2024-12-06 09:54:54.095226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.173 [2024-12-06 09:54:54.325216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.173 [2024-12-06 09:54:54.325374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:29.432 09:54:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.432 malloc1 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.432 [2024-12-06 09:54:54.653005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:29.432 [2024-12-06 09:54:54.653174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.432 [2024-12-06 09:54:54.653217] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:17:29.432 [2024-12-06 09:54:54.653247] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.432 [2024-12-06 09:54:54.655413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.432 [2024-12-06 09:54:54.655479] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:29.432 pt1 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:29.432 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:29.433 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:29.433 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:29.433 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:29.433 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:29.433 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.433 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.692 malloc2 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.692 [2024-12-06 09:54:54.719212] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:29.692 [2024-12-06 09:54:54.719317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.692 [2024-12-06 09:54:54.719365] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:29.692 [2024-12-06 09:54:54.719375] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.692 [2024-12-06 09:54:54.721515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.692 [2024-12-06 09:54:54.721579] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:29.692 pt2 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.692 [2024-12-06 09:54:54.731204] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:29.692 [2024-12-06 09:54:54.733288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:29.692 [2024-12-06 09:54:54.733468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:29.692 [2024-12-06 09:54:54.733483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:29.692 [2024-12-06 09:54:54.733556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:29.692 [2024-12-06 09:54:54.733680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:29.692 [2024-12-06 09:54:54.733691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:29.692 [2024-12-06 09:54:54.733790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.692 09:54:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.692 "name": "raid_bdev1", 00:17:29.692 "uuid": "310ab57d-b4db-4c03-9919-e319301158c7", 00:17:29.692 "strip_size_kb": 0, 00:17:29.692 "state": "online", 00:17:29.692 "raid_level": "raid1", 00:17:29.692 "superblock": true, 00:17:29.692 "num_base_bdevs": 2, 00:17:29.692 "num_base_bdevs_discovered": 2, 00:17:29.692 "num_base_bdevs_operational": 2, 00:17:29.692 "base_bdevs_list": [ 00:17:29.692 { 00:17:29.692 "name": "pt1", 00:17:29.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:29.692 "is_configured": true, 00:17:29.692 "data_offset": 256, 00:17:29.692 "data_size": 7936 00:17:29.692 }, 00:17:29.692 { 00:17:29.692 "name": "pt2", 00:17:29.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.692 "is_configured": true, 00:17:29.692 "data_offset": 256, 00:17:29.692 "data_size": 7936 00:17:29.692 } 00:17:29.692 ] 00:17:29.692 }' 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:29.692 09:54:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.951 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:29.951 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:29.951 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:29.951 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:29.951 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:29.951 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:29.951 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.951 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:29.951 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.951 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.951 [2024-12-06 09:54:55.186635] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.951 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.209 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:30.209 "name": "raid_bdev1", 00:17:30.209 "aliases": [ 00:17:30.209 "310ab57d-b4db-4c03-9919-e319301158c7" 00:17:30.209 ], 00:17:30.209 "product_name": "Raid Volume", 00:17:30.209 "block_size": 4096, 00:17:30.209 "num_blocks": 7936, 00:17:30.209 "uuid": "310ab57d-b4db-4c03-9919-e319301158c7", 00:17:30.209 "md_size": 32, 
00:17:30.209 "md_interleave": false, 00:17:30.209 "dif_type": 0, 00:17:30.209 "assigned_rate_limits": { 00:17:30.209 "rw_ios_per_sec": 0, 00:17:30.209 "rw_mbytes_per_sec": 0, 00:17:30.209 "r_mbytes_per_sec": 0, 00:17:30.209 "w_mbytes_per_sec": 0 00:17:30.209 }, 00:17:30.209 "claimed": false, 00:17:30.209 "zoned": false, 00:17:30.209 "supported_io_types": { 00:17:30.209 "read": true, 00:17:30.209 "write": true, 00:17:30.209 "unmap": false, 00:17:30.209 "flush": false, 00:17:30.209 "reset": true, 00:17:30.209 "nvme_admin": false, 00:17:30.209 "nvme_io": false, 00:17:30.209 "nvme_io_md": false, 00:17:30.209 "write_zeroes": true, 00:17:30.209 "zcopy": false, 00:17:30.209 "get_zone_info": false, 00:17:30.209 "zone_management": false, 00:17:30.209 "zone_append": false, 00:17:30.209 "compare": false, 00:17:30.209 "compare_and_write": false, 00:17:30.209 "abort": false, 00:17:30.209 "seek_hole": false, 00:17:30.209 "seek_data": false, 00:17:30.209 "copy": false, 00:17:30.209 "nvme_iov_md": false 00:17:30.209 }, 00:17:30.209 "memory_domains": [ 00:17:30.209 { 00:17:30.209 "dma_device_id": "system", 00:17:30.209 "dma_device_type": 1 00:17:30.209 }, 00:17:30.209 { 00:17:30.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.209 "dma_device_type": 2 00:17:30.209 }, 00:17:30.209 { 00:17:30.209 "dma_device_id": "system", 00:17:30.209 "dma_device_type": 1 00:17:30.209 }, 00:17:30.209 { 00:17:30.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.210 "dma_device_type": 2 00:17:30.210 } 00:17:30.210 ], 00:17:30.210 "driver_specific": { 00:17:30.210 "raid": { 00:17:30.210 "uuid": "310ab57d-b4db-4c03-9919-e319301158c7", 00:17:30.210 "strip_size_kb": 0, 00:17:30.210 "state": "online", 00:17:30.210 "raid_level": "raid1", 00:17:30.210 "superblock": true, 00:17:30.210 "num_base_bdevs": 2, 00:17:30.210 "num_base_bdevs_discovered": 2, 00:17:30.210 "num_base_bdevs_operational": 2, 00:17:30.210 "base_bdevs_list": [ 00:17:30.210 { 00:17:30.210 "name": "pt1", 00:17:30.210 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:30.210 "is_configured": true, 00:17:30.210 "data_offset": 256, 00:17:30.210 "data_size": 7936 00:17:30.210 }, 00:17:30.210 { 00:17:30.210 "name": "pt2", 00:17:30.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.210 "is_configured": true, 00:17:30.210 "data_offset": 256, 00:17:30.210 "data_size": 7936 00:17:30.210 } 00:17:30.210 ] 00:17:30.210 } 00:17:30.210 } 00:17:30.210 }' 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:30.210 pt2' 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.210 [2024-12-06 09:54:55.406189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=310ab57d-b4db-4c03-9919-e319301158c7 00:17:30.210 
09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 310ab57d-b4db-4c03-9919-e319301158c7 ']' 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.210 [2024-12-06 09:54:55.457863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.210 [2024-12-06 09:54:55.457923] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.210 [2024-12-06 09:54:55.458024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.210 [2024-12-06 09:54:55.458101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.210 [2024-12-06 09:54:55.458154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.210 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:30.469 09:54:55 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.469 [2024-12-06 09:54:55.585661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:30.469 [2024-12-06 09:54:55.587783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:30.469 [2024-12-06 09:54:55.587865] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:30.469 [2024-12-06 09:54:55.587926] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:17:30.469 [2024-12-06 09:54:55.587941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.469 [2024-12-06 09:54:55.587952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:30.469 request: 00:17:30.469 { 00:17:30.469 "name": "raid_bdev1", 00:17:30.469 "raid_level": "raid1", 00:17:30.469 "base_bdevs": [ 00:17:30.469 "malloc1", 00:17:30.469 "malloc2" 00:17:30.469 ], 00:17:30.469 "superblock": false, 00:17:30.469 "method": "bdev_raid_create", 00:17:30.469 "req_id": 1 00:17:30.469 } 00:17:30.469 Got JSON-RPC error response 00:17:30.469 response: 00:17:30.469 { 00:17:30.469 "code": -17, 00:17:30.469 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:30.469 } 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.469 09:54:55 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.469 [2024-12-06 09:54:55.649533] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:30.469 [2024-12-06 09:54:55.649617] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.469 [2024-12-06 09:54:55.649648] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:30.469 [2024-12-06 09:54:55.649680] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.469 [2024-12-06 09:54:55.651775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.469 [2024-12-06 09:54:55.651843] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:30.469 [2024-12-06 09:54:55.651908] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:30.469 [2024-12-06 09:54:55.652014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:30.469 pt1 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.469 
09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.469 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.469 "name": "raid_bdev1", 00:17:30.469 "uuid": "310ab57d-b4db-4c03-9919-e319301158c7", 00:17:30.469 "strip_size_kb": 0, 00:17:30.469 "state": "configuring", 00:17:30.469 "raid_level": "raid1", 00:17:30.469 "superblock": true, 00:17:30.469 "num_base_bdevs": 2, 00:17:30.469 "num_base_bdevs_discovered": 1, 00:17:30.469 
"num_base_bdevs_operational": 2, 00:17:30.469 "base_bdevs_list": [ 00:17:30.469 { 00:17:30.469 "name": "pt1", 00:17:30.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:30.469 "is_configured": true, 00:17:30.469 "data_offset": 256, 00:17:30.469 "data_size": 7936 00:17:30.469 }, 00:17:30.470 { 00:17:30.470 "name": null, 00:17:30.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.470 "is_configured": false, 00:17:30.470 "data_offset": 256, 00:17:30.470 "data_size": 7936 00:17:30.470 } 00:17:30.470 ] 00:17:30.470 }' 00:17:30.470 09:54:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.470 09:54:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.036 [2024-12-06 09:54:56.120747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.036 [2024-12-06 09:54:56.120833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.036 [2024-12-06 09:54:56.120854] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:31.036 [2024-12-06 09:54:56.120866] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.036 
[2024-12-06 09:54:56.121080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.036 [2024-12-06 09:54:56.121097] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.036 [2024-12-06 09:54:56.121141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:31.036 [2024-12-06 09:54:56.121176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.036 [2024-12-06 09:54:56.121297] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:31.036 [2024-12-06 09:54:56.121308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:31.036 [2024-12-06 09:54:56.121385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:31.036 [2024-12-06 09:54:56.121509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:31.036 [2024-12-06 09:54:56.121517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:31.036 [2024-12-06 09:54:56.121619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.036 pt2 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.036 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.037 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.037 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.037 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.037 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.037 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.037 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.037 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.037 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.037 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.037 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.037 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.037 "name": "raid_bdev1", 00:17:31.037 "uuid": "310ab57d-b4db-4c03-9919-e319301158c7", 00:17:31.037 "strip_size_kb": 0, 00:17:31.037 "state": "online", 00:17:31.037 "raid_level": "raid1", 00:17:31.037 "superblock": true, 00:17:31.037 "num_base_bdevs": 2, 00:17:31.037 "num_base_bdevs_discovered": 2, 00:17:31.037 "num_base_bdevs_operational": 2, 00:17:31.037 "base_bdevs_list": [ 00:17:31.037 { 00:17:31.037 "name": 
"pt1", 00:17:31.037 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:31.037 "is_configured": true, 00:17:31.037 "data_offset": 256, 00:17:31.037 "data_size": 7936 00:17:31.037 }, 00:17:31.037 { 00:17:31.037 "name": "pt2", 00:17:31.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.037 "is_configured": true, 00:17:31.037 "data_offset": 256, 00:17:31.037 "data_size": 7936 00:17:31.037 } 00:17:31.037 ] 00:17:31.037 }' 00:17:31.037 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.037 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.298 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:31.298 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:31.298 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:31.298 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:31.298 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:31.298 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:31.298 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:31.298 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.298 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.298 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:31.298 [2024-12-06 09:54:56.540320] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.298 09:54:56 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:31.568 "name": "raid_bdev1", 00:17:31.568 "aliases": [ 00:17:31.568 "310ab57d-b4db-4c03-9919-e319301158c7" 00:17:31.568 ], 00:17:31.568 "product_name": "Raid Volume", 00:17:31.568 "block_size": 4096, 00:17:31.568 "num_blocks": 7936, 00:17:31.568 "uuid": "310ab57d-b4db-4c03-9919-e319301158c7", 00:17:31.568 "md_size": 32, 00:17:31.568 "md_interleave": false, 00:17:31.568 "dif_type": 0, 00:17:31.568 "assigned_rate_limits": { 00:17:31.568 "rw_ios_per_sec": 0, 00:17:31.568 "rw_mbytes_per_sec": 0, 00:17:31.568 "r_mbytes_per_sec": 0, 00:17:31.568 "w_mbytes_per_sec": 0 00:17:31.568 }, 00:17:31.568 "claimed": false, 00:17:31.568 "zoned": false, 00:17:31.568 "supported_io_types": { 00:17:31.568 "read": true, 00:17:31.568 "write": true, 00:17:31.568 "unmap": false, 00:17:31.568 "flush": false, 00:17:31.568 "reset": true, 00:17:31.568 "nvme_admin": false, 00:17:31.568 "nvme_io": false, 00:17:31.568 "nvme_io_md": false, 00:17:31.568 "write_zeroes": true, 00:17:31.568 "zcopy": false, 00:17:31.568 "get_zone_info": false, 00:17:31.568 "zone_management": false, 00:17:31.568 "zone_append": false, 00:17:31.568 "compare": false, 00:17:31.568 "compare_and_write": false, 00:17:31.568 "abort": false, 00:17:31.568 "seek_hole": false, 00:17:31.568 "seek_data": false, 00:17:31.568 "copy": false, 00:17:31.568 "nvme_iov_md": false 00:17:31.568 }, 00:17:31.568 "memory_domains": [ 00:17:31.568 { 00:17:31.568 "dma_device_id": "system", 00:17:31.568 "dma_device_type": 1 00:17:31.568 }, 00:17:31.568 { 00:17:31.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.568 "dma_device_type": 2 00:17:31.568 }, 00:17:31.568 { 00:17:31.568 "dma_device_id": "system", 00:17:31.568 "dma_device_type": 1 00:17:31.568 }, 00:17:31.568 { 00:17:31.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.568 
"dma_device_type": 2 00:17:31.568 } 00:17:31.568 ], 00:17:31.568 "driver_specific": { 00:17:31.568 "raid": { 00:17:31.568 "uuid": "310ab57d-b4db-4c03-9919-e319301158c7", 00:17:31.568 "strip_size_kb": 0, 00:17:31.568 "state": "online", 00:17:31.568 "raid_level": "raid1", 00:17:31.568 "superblock": true, 00:17:31.568 "num_base_bdevs": 2, 00:17:31.568 "num_base_bdevs_discovered": 2, 00:17:31.568 "num_base_bdevs_operational": 2, 00:17:31.568 "base_bdevs_list": [ 00:17:31.568 { 00:17:31.568 "name": "pt1", 00:17:31.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:31.568 "is_configured": true, 00:17:31.568 "data_offset": 256, 00:17:31.568 "data_size": 7936 00:17:31.568 }, 00:17:31.568 { 00:17:31.568 "name": "pt2", 00:17:31.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.568 "is_configured": true, 00:17:31.568 "data_offset": 256, 00:17:31.568 "data_size": 7936 00:17:31.568 } 00:17:31.568 ] 00:17:31.568 } 00:17:31.568 } 00:17:31.568 }' 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:31.568 pt2' 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.568 09:54:56 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:31.568 09:54:56 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.568 [2024-12-06 09:54:56.796048] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 310ab57d-b4db-4c03-9919-e319301158c7 '!=' 310ab57d-b4db-4c03-9919-e319301158c7 ']' 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.568 [2024-12-06 09:54:56.823753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.568 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.843 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.843 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.843 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.843 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.843 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.843 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.843 "name": "raid_bdev1", 00:17:31.843 "uuid": "310ab57d-b4db-4c03-9919-e319301158c7", 00:17:31.843 "strip_size_kb": 0, 00:17:31.843 "state": "online", 00:17:31.843 "raid_level": "raid1", 00:17:31.843 "superblock": true, 00:17:31.843 "num_base_bdevs": 2, 00:17:31.843 "num_base_bdevs_discovered": 1, 00:17:31.843 "num_base_bdevs_operational": 1, 00:17:31.843 "base_bdevs_list": [ 00:17:31.843 { 00:17:31.843 "name": null, 00:17:31.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.843 
"is_configured": false, 00:17:31.843 "data_offset": 0, 00:17:31.843 "data_size": 7936 00:17:31.843 }, 00:17:31.843 { 00:17:31.843 "name": "pt2", 00:17:31.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.843 "is_configured": true, 00:17:31.843 "data_offset": 256, 00:17:31.843 "data_size": 7936 00:17:31.843 } 00:17:31.843 ] 00:17:31.843 }' 00:17:31.843 09:54:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.843 09:54:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.103 [2024-12-06 09:54:57.278963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.103 [2024-12-06 09:54:57.279045] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.103 [2024-12-06 09:54:57.279149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.103 [2024-12-06 09:54:57.279216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.103 [2024-12-06 09:54:57.279266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.103 09:54:57 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:32.103 09:54:57 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.103 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.103 [2024-12-06 09:54:57.350835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:32.103 [2024-12-06 09:54:57.350930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.103 [2024-12-06 09:54:57.350962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:32.103 [2024-12-06 09:54:57.351019] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.103 [2024-12-06 09:54:57.353363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.103 [2024-12-06 09:54:57.353443] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:32.103 [2024-12-06 09:54:57.353516] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:32.103 [2024-12-06 09:54:57.353585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.103 [2024-12-06 09:54:57.353692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:32.103 [2024-12-06 09:54:57.353737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:32.103 [2024-12-06 09:54:57.353827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:32.103 [2024-12-06 09:54:57.353968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:32.103 [2024-12-06 09:54:57.354000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:32.103 [2024-12-06 09:54:57.354141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.103 pt2 00:17:32.104 09:54:57 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.104 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.363 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.363 09:54:57 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.363 "name": "raid_bdev1", 00:17:32.363 "uuid": "310ab57d-b4db-4c03-9919-e319301158c7", 00:17:32.363 "strip_size_kb": 0, 00:17:32.363 "state": "online", 00:17:32.363 "raid_level": "raid1", 00:17:32.363 "superblock": true, 00:17:32.363 "num_base_bdevs": 2, 00:17:32.364 "num_base_bdevs_discovered": 1, 00:17:32.364 "num_base_bdevs_operational": 1, 00:17:32.364 "base_bdevs_list": [ 00:17:32.364 { 00:17:32.364 "name": null, 00:17:32.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.364 "is_configured": false, 00:17:32.364 "data_offset": 256, 00:17:32.364 "data_size": 7936 00:17:32.364 }, 00:17:32.364 { 00:17:32.364 "name": "pt2", 00:17:32.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.364 "is_configured": true, 00:17:32.364 "data_offset": 256, 00:17:32.364 "data_size": 7936 00:17:32.364 } 00:17:32.364 ] 00:17:32.364 }' 00:17:32.364 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.364 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.623 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.623 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.623 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.623 [2024-12-06 09:54:57.758086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.624 [2024-12-06 09:54:57.758169] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.624 [2024-12-06 09:54:57.758223] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.624 [2024-12-06 09:54:57.758267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:17:32.624 [2024-12-06 09:54:57.758276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.624 [2024-12-06 09:54:57.822012] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:32.624 [2024-12-06 09:54:57.822092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.624 [2024-12-06 09:54:57.822125] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:32.624 [2024-12-06 
09:54:57.822164] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.624 [2024-12-06 09:54:57.824342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.624 [2024-12-06 09:54:57.824412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:32.624 [2024-12-06 09:54:57.824502] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:32.624 [2024-12-06 09:54:57.824576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:32.624 [2024-12-06 09:54:57.824730] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:32.624 [2024-12-06 09:54:57.824778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.624 [2024-12-06 09:54:57.824815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:32.624 [2024-12-06 09:54:57.824923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.624 [2024-12-06 09:54:57.825031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:32.624 [2024-12-06 09:54:57.825042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:32.624 [2024-12-06 09:54:57.825114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:32.624 [2024-12-06 09:54:57.825230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:32.624 [2024-12-06 09:54:57.825241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:32.624 [2024-12-06 09:54:57.825342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.624 pt1 00:17:32.624 09:54:57 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.624 "name": "raid_bdev1", 00:17:32.624 "uuid": "310ab57d-b4db-4c03-9919-e319301158c7", 00:17:32.624 "strip_size_kb": 0, 00:17:32.624 "state": "online", 00:17:32.624 "raid_level": "raid1", 00:17:32.624 "superblock": true, 00:17:32.624 "num_base_bdevs": 2, 00:17:32.624 "num_base_bdevs_discovered": 1, 00:17:32.624 "num_base_bdevs_operational": 1, 00:17:32.624 "base_bdevs_list": [ 00:17:32.624 { 00:17:32.624 "name": null, 00:17:32.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.624 "is_configured": false, 00:17:32.624 "data_offset": 256, 00:17:32.624 "data_size": 7936 00:17:32.624 }, 00:17:32.624 { 00:17:32.624 "name": "pt2", 00:17:32.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.624 "is_configured": true, 00:17:32.624 "data_offset": 256, 00:17:32.624 "data_size": 7936 00:17:32.624 } 00:17:32.624 ] 00:17:32.624 }' 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.624 09:54:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:33.195 09:54:58 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:33.195 [2024-12-06 09:54:58.341493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 310ab57d-b4db-4c03-9919-e319301158c7 '!=' 310ab57d-b4db-4c03-9919-e319301158c7 ']' 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87324 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87324 ']' 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87324 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87324 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.195 killing process with pid 87324 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 87324' 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87324 00:17:33.195 [2024-12-06 09:54:58.414476] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.195 [2024-12-06 09:54:58.414560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.195 [2024-12-06 09:54:58.414609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.195 [2024-12-06 09:54:58.414626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:33.195 09:54:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87324 00:17:33.455 [2024-12-06 09:54:58.647528] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.835 09:54:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:34.835 00:17:34.835 real 0m6.148s 00:17:34.835 user 0m9.130s 00:17:34.835 sys 0m1.173s 00:17:34.835 09:54:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.835 ************************************ 00:17:34.835 END TEST raid_superblock_test_md_separate 00:17:34.835 ************************************ 00:17:34.835 09:54:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.835 09:54:59 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:34.835 09:54:59 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:34.835 09:54:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:34.835 09:54:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.835 09:54:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:34.835 ************************************ 
00:17:34.835 START TEST raid_rebuild_test_sb_md_separate 00:17:34.835 ************************************ 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:34.835 
09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87647 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87647 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87647 ']' 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.835 09:54:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.835 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:34.835 Zero copy mechanism will not be used. 00:17:34.835 [2024-12-06 09:55:00.009447] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:17:34.835 [2024-12-06 09:55:00.009567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87647 ] 00:17:35.095 [2024-12-06 09:55:00.192221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.095 [2024-12-06 09:55:00.321337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.355 [2024-12-06 09:55:00.551778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.355 [2024-12-06 09:55:00.551818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.615 BaseBdev1_malloc 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.615 [2024-12-06 09:55:00.876039] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:35.615 [2024-12-06 09:55:00.876201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.615 [2024-12-06 09:55:00.876234] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:35.615 [2024-12-06 09:55:00.876247] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.615 [2024-12-06 09:55:00.878360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.615 [2024-12-06 09:55:00.878398] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:35.615 BaseBdev1 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:35.615 09:55:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.615 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.875 BaseBdev2_malloc 00:17:35.875 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.875 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:35.875 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.875 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.875 [2024-12-06 09:55:00.937991] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:35.875 [2024-12-06 09:55:00.938056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.875 [2024-12-06 09:55:00.938077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:35.875 [2024-12-06 09:55:00.938090] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.875 [2024-12-06 09:55:00.940135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.875 [2024-12-06 09:55:00.940240] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:35.875 BaseBdev2 00:17:35.875 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.875 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:35.875 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.875 09:55:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:17:35.875 spare_malloc 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.875 spare_delay 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.875 [2024-12-06 09:55:01.039362] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:35.875 [2024-12-06 09:55:01.039506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.875 [2024-12-06 09:55:01.039532] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:35.875 [2024-12-06 09:55:01.039544] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.875 [2024-12-06 09:55:01.041659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.875 [2024-12-06 09:55:01.041698] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:35.875 spare 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.875 [2024-12-06 09:55:01.047390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.875 [2024-12-06 09:55:01.049378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:35.875 [2024-12-06 09:55:01.049547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:35.875 [2024-12-06 09:55:01.049569] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:35.875 [2024-12-06 09:55:01.049640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:35.875 [2024-12-06 09:55:01.049773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:35.875 [2024-12-06 09:55:01.049782] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:35.875 [2024-12-06 09:55:01.049890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.875 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.875 "name": "raid_bdev1", 00:17:35.875 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093", 00:17:35.875 "strip_size_kb": 0, 00:17:35.875 "state": "online", 00:17:35.875 "raid_level": "raid1", 00:17:35.875 "superblock": true, 00:17:35.875 "num_base_bdevs": 2, 00:17:35.875 "num_base_bdevs_discovered": 2, 00:17:35.875 "num_base_bdevs_operational": 2, 00:17:35.875 "base_bdevs_list": [ 00:17:35.875 { 00:17:35.875 "name": "BaseBdev1", 00:17:35.876 "uuid": "55e6ff46-7f9b-5bfa-8e9f-db3b016bab5e", 00:17:35.876 "is_configured": true, 00:17:35.876 "data_offset": 256, 
00:17:35.876 "data_size": 7936 00:17:35.876 }, 00:17:35.876 { 00:17:35.876 "name": "BaseBdev2", 00:17:35.876 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51", 00:17:35.876 "is_configured": true, 00:17:35.876 "data_offset": 256, 00:17:35.876 "data_size": 7936 00:17:35.876 } 00:17:35.876 ] 00:17:35.876 }' 00:17:35.876 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.876 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.444 [2024-12-06 09:55:01.474875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.444 09:55:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:36.444 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:36.703 [2024-12-06 09:55:01.718290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:36.703 /dev/nbd0 00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:36.703 09:55:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:17:36.703 1+0 records in
00:17:36.703 1+0 records out
00:17:36.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365306 s, 11.2 MB/s
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:17:36.703 09:55:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct
00:17:37.271 7936+0 records in
00:17:37.271 7936+0 records out
00:17:37.271 32505856 bytes (33 MB, 31 MiB) copied, 0.629925 s, 51.6 MB/s
00:17:37.271 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:17:37.271 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:17:37.271 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:17:37.271 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list
00:17:37.271 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i
00:17:37.271 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:17:37.271 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:17:37.530 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:17:37.530 [2024-12-06 09:55:02.646381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:37.530 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:17:37.530 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:17:37.530 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:17:37.530 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:17:37.530 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.531 [2024-12-06 09:55:02.663013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:37.531 "name": "raid_bdev1",
00:17:37.531 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093",
00:17:37.531 "strip_size_kb": 0,
00:17:37.531 "state": "online",
00:17:37.531 "raid_level": "raid1",
00:17:37.531 "superblock": true,
00:17:37.531 "num_base_bdevs": 2,
00:17:37.531 "num_base_bdevs_discovered": 1,
00:17:37.531 "num_base_bdevs_operational": 1,
00:17:37.531 "base_bdevs_list": [
00:17:37.531 {
00:17:37.531 "name": null,
00:17:37.531 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:37.531 "is_configured": false,
00:17:37.531 "data_offset": 0,
00:17:37.531 "data_size": 7936
00:17:37.531 },
00:17:37.531 {
00:17:37.531 "name": "BaseBdev2",
00:17:37.531 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51",
00:17:37.531 "is_configured": true,
00:17:37.531 "data_offset": 256,
00:17:37.531 "data_size": 7936
00:17:37.531 }
00:17:37.531 ]
00:17:37.531 }'
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:37.531 09:55:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:38.100 09:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:38.100 09:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.100 09:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:38.100 [2024-12-06 09:55:03.086304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:38.100 [2024-12-06 09:55:03.098476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260
00:17:38.100 09:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.100 09:55:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1
00:17:38.100 [2024-12-06 09:55:03.100602] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:39.040 "name": "raid_bdev1",
00:17:39.040 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093",
00:17:39.040 "strip_size_kb": 0,
00:17:39.040 "state": "online",
00:17:39.040 "raid_level": "raid1",
00:17:39.040 "superblock": true,
00:17:39.040 "num_base_bdevs": 2,
00:17:39.040 "num_base_bdevs_discovered": 2,
00:17:39.040 "num_base_bdevs_operational": 2,
00:17:39.040 "process": {
00:17:39.040 "type": "rebuild",
00:17:39.040 "target": "spare",
00:17:39.040 "progress": {
00:17:39.040 "blocks": 2560,
00:17:39.040 "percent": 32
00:17:39.040 }
00:17:39.040 },
00:17:39.040 "base_bdevs_list": [
00:17:39.040 {
00:17:39.040 "name": "spare",
00:17:39.040 "uuid": "50f1cf67-dec3-50b0-9925-ba76da0e754a",
00:17:39.040 "is_configured": true,
00:17:39.040 "data_offset": 256,
00:17:39.040 "data_size": 7936
00:17:39.040 },
00:17:39.040 {
00:17:39.040 "name": "BaseBdev2",
00:17:39.040 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51",
00:17:39.040 "is_configured": true,
00:17:39.040 "data_offset": 256,
00:17:39.040 "data_size": 7936
00:17:39.040 }
00:17:39.040 ]
00:17:39.040 }'
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:17:39.040 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.041 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.041 [2024-12-06 09:55:04.236625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:39.041 [2024-12-06 09:55:04.309349] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:17:39.041 [2024-12-06 09:55:04.309472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:39.041 [2024-12-06 09:55:04.309489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:17:39.041 [2024-12-06 09:55:04.309504] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:39.300 "name": "raid_bdev1",
00:17:39.300 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093",
00:17:39.300 "strip_size_kb": 0,
00:17:39.300 "state": "online",
00:17:39.300 "raid_level": "raid1",
00:17:39.300 "superblock": true,
00:17:39.300 "num_base_bdevs": 2,
00:17:39.300 "num_base_bdevs_discovered": 1,
00:17:39.300 "num_base_bdevs_operational": 1,
00:17:39.300 "base_bdevs_list": [
00:17:39.300 {
00:17:39.300 "name": null,
00:17:39.300 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:39.300 "is_configured": false,
00:17:39.300 "data_offset": 0,
00:17:39.300 "data_size": 7936
00:17:39.300 },
00:17:39.300 {
00:17:39.300 "name": "BaseBdev2",
00:17:39.300 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51",
00:17:39.300 "is_configured": true,
00:17:39.300 "data_offset": 256,
00:17:39.300 "data_size": 7936
00:17:39.300 }
00:17:39.300 ]
00:17:39.300 }'
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:39.300 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:39.559 "name": "raid_bdev1",
00:17:39.559 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093",
00:17:39.559 "strip_size_kb": 0,
00:17:39.559 "state": "online",
00:17:39.559 "raid_level": "raid1",
00:17:39.559 "superblock": true,
00:17:39.559 "num_base_bdevs": 2,
00:17:39.559 "num_base_bdevs_discovered": 1,
00:17:39.559 "num_base_bdevs_operational": 1,
00:17:39.559 "base_bdevs_list": [
00:17:39.559 {
00:17:39.559 "name": null,
00:17:39.559 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:39.559 "is_configured": false,
00:17:39.559 "data_offset": 0,
00:17:39.559 "data_size": 7936
00:17:39.559 },
00:17:39.559 {
00:17:39.559 "name": "BaseBdev2",
00:17:39.559 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51",
00:17:39.559 "is_configured": true,
00:17:39.559 "data_offset": 256,
00:17:39.559 "data_size": 7936
00:17:39.559 }
00:17:39.559 ]
00:17:39.559 }'
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:39.559 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:39.819 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:39.819 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:17:39.819 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.819 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:39.819 [2024-12-06 09:55:04.842450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:17:39.819 [2024-12-06 09:55:04.856098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330
00:17:39.819 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.819 09:55:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1
00:17:39.819 [2024-12-06 09:55:04.858193] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:40.755 "name": "raid_bdev1",
00:17:40.755 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093",
00:17:40.755 "strip_size_kb": 0,
00:17:40.755 "state": "online",
00:17:40.755 "raid_level": "raid1",
00:17:40.755 "superblock": true,
00:17:40.755 "num_base_bdevs": 2,
00:17:40.755 "num_base_bdevs_discovered": 2,
00:17:40.755 "num_base_bdevs_operational": 2,
00:17:40.755 "process": {
00:17:40.755 "type": "rebuild",
00:17:40.755 "target": "spare",
00:17:40.755 "progress": {
00:17:40.755 "blocks": 2560,
00:17:40.755 "percent": 32
00:17:40.755 }
00:17:40.755 },
00:17:40.755 "base_bdevs_list": [
00:17:40.755 {
00:17:40.755 "name": "spare",
00:17:40.755 "uuid": "50f1cf67-dec3-50b0-9925-ba76da0e754a",
00:17:40.755 "is_configured": true,
00:17:40.755 "data_offset": 256,
00:17:40.755 "data_size": 7936
00:17:40.755 },
00:17:40.755 {
00:17:40.755 "name": "BaseBdev2",
00:17:40.755 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51",
00:17:40.755 "is_configured": true,
00:17:40.755 "data_offset": 256,
00:17:40.755 "data_size": 7936
00:17:40.755 }
00:17:40.755 ]
00:17:40.755 }'
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:17:40.755 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=699
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:40.755 09:55:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:40.756 09:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.015 09:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:41.015 "name": "raid_bdev1",
00:17:41.015 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093",
00:17:41.015 "strip_size_kb": 0,
00:17:41.015 "state": "online",
00:17:41.015 "raid_level": "raid1",
00:17:41.015 "superblock": true,
00:17:41.015 "num_base_bdevs": 2,
00:17:41.015 "num_base_bdevs_discovered": 2,
00:17:41.015 "num_base_bdevs_operational": 2,
00:17:41.015 "process": {
00:17:41.015 "type": "rebuild",
00:17:41.015 "target": "spare",
00:17:41.015 "progress": {
00:17:41.015 "blocks": 2816,
00:17:41.015 "percent": 35
00:17:41.015 }
00:17:41.015 },
00:17:41.015 "base_bdevs_list": [
00:17:41.015 {
00:17:41.015 "name": "spare",
00:17:41.015 "uuid": "50f1cf67-dec3-50b0-9925-ba76da0e754a",
00:17:41.015 "is_configured": true,
00:17:41.015 "data_offset": 256,
00:17:41.015 "data_size": 7936
00:17:41.015 },
00:17:41.015 {
00:17:41.015 "name": "BaseBdev2",
00:17:41.015 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51",
00:17:41.015 "is_configured": true,
00:17:41.015 "data_offset": 256,
00:17:41.015 "data_size": 7936
00:17:41.015 }
00:17:41.015 ]
00:17:41.015 }'
00:17:41.015 09:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:41.015 09:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:41.015 09:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:41.015 09:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:41.015 09:55:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:17:41.952 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:41.952 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:41.952 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:41.952 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:41.952 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:41.952 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:41.952 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:41.952 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:41.952 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.952 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:41.952 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.952 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:41.952 "name": "raid_bdev1",
00:17:41.952 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093",
00:17:41.952 "strip_size_kb": 0,
00:17:41.952 "state": "online",
00:17:41.952 "raid_level": "raid1",
00:17:41.952 "superblock": true,
00:17:41.952 "num_base_bdevs": 2,
00:17:41.952 "num_base_bdevs_discovered": 2,
00:17:41.952 "num_base_bdevs_operational": 2,
00:17:41.952 "process": {
00:17:41.952 "type": "rebuild",
00:17:41.952 "target": "spare",
00:17:41.952 "progress": {
00:17:41.952 "blocks": 5632,
00:17:41.952 "percent": 70
00:17:41.952 }
00:17:41.952 },
00:17:41.952 "base_bdevs_list": [
00:17:41.952 {
00:17:41.952 "name": "spare",
00:17:41.952 "uuid": "50f1cf67-dec3-50b0-9925-ba76da0e754a",
00:17:41.952 "is_configured": true,
00:17:41.952 "data_offset": 256,
00:17:41.952 "data_size": 7936
00:17:41.952 },
00:17:41.952 {
00:17:41.952 "name": "BaseBdev2",
00:17:41.953 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51",
00:17:41.953 "is_configured": true,
00:17:41.953 "data_offset": 256,
00:17:41.953 "data_size": 7936
00:17:41.953 }
00:17:41.953 ]
00:17:41.953 }'
00:17:42.212 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:42.212 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:17:42.212 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:42.212 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:17:42.212 09:55:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1
00:17:42.782 [2024-12-06 09:55:07.980216] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:17:42.782 [2024-12-06 09:55:07.980368] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:17:42.782 [2024-12-06 09:55:07.980518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:43.042 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:17:43.042 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:17:43.042 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:43.042 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:17:43.042 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare
00:17:43.042 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:43.042 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:43.042 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:43.042 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.042 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:43.302 "name": "raid_bdev1",
00:17:43.302 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093",
00:17:43.302 "strip_size_kb": 0,
00:17:43.302 "state": "online",
00:17:43.302 "raid_level": "raid1",
00:17:43.302 "superblock": true,
00:17:43.302 "num_base_bdevs": 2,
00:17:43.302 "num_base_bdevs_discovered": 2,
00:17:43.302 "num_base_bdevs_operational": 2,
00:17:43.302 "base_bdevs_list": [
00:17:43.302 {
00:17:43.302 "name": "spare",
00:17:43.302 "uuid": "50f1cf67-dec3-50b0-9925-ba76da0e754a",
00:17:43.302 "is_configured": true,
00:17:43.302 "data_offset": 256,
00:17:43.302 "data_size": 7936
00:17:43.302 },
00:17:43.302 {
00:17:43.302 "name": "BaseBdev2",
00:17:43.302 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51",
00:17:43.302 "is_configured": true,
00:17:43.302 "data_offset": 256,
00:17:43.302 "data_size": 7936
00:17:43.302 }
00:17:43.302 ]
00:17:43.302 }'
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:43.302 "name": "raid_bdev1",
00:17:43.302 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093",
00:17:43.302 "strip_size_kb": 0,
00:17:43.302 "state": "online",
00:17:43.302 "raid_level": "raid1",
00:17:43.302 "superblock": true,
00:17:43.302 "num_base_bdevs": 2,
00:17:43.302 "num_base_bdevs_discovered": 2,
00:17:43.302 "num_base_bdevs_operational": 2,
00:17:43.302 "base_bdevs_list": [
00:17:43.302 {
00:17:43.302 "name": "spare",
00:17:43.302 "uuid": "50f1cf67-dec3-50b0-9925-ba76da0e754a",
00:17:43.302 "is_configured": true,
00:17:43.302 "data_offset": 256,
00:17:43.302 "data_size": 7936
00:17:43.302 },
00:17:43.302 {
00:17:43.302 "name": "BaseBdev2",
00:17:43.302 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51",
00:17:43.302 "is_configured": true,
00:17:43.302 "data_offset": 256,
00:17:43.302 "data_size": 7936
00:17:43.302 }
00:17:43.302 ]
00:17:43.302 }'
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:43.302 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:43.303 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.303 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:43.303 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:43.303 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.563 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:43.563 "name": "raid_bdev1",
00:17:43.563 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093",
00:17:43.563 "strip_size_kb": 0,
00:17:43.563 "state": "online",
00:17:43.563 "raid_level": "raid1",
00:17:43.563 "superblock": true,
00:17:43.563 "num_base_bdevs": 2,
00:17:43.563 "num_base_bdevs_discovered": 2,
00:17:43.563 "num_base_bdevs_operational": 2,
00:17:43.563 "base_bdevs_list": [
00:17:43.563 {
00:17:43.563 "name": "spare",
00:17:43.563 "uuid": "50f1cf67-dec3-50b0-9925-ba76da0e754a",
00:17:43.563 "is_configured": true,
00:17:43.563 "data_offset": 256,
00:17:43.563 "data_size": 7936
00:17:43.563 },
00:17:43.563 {
00:17:43.563 "name": "BaseBdev2",
00:17:43.563 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51",
00:17:43.563 "is_configured": true,
00:17:43.563 "data_offset": 256,
00:17:43.563 "data_size": 7936
00:17:43.563 }
00:17:43.563 ]
00:17:43.563 }'
00:17:43.563 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:43.563 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:43.824 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:43.824 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.824 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:43.824 [2024-12-06 09:55:08.960455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:43.824 [2024-12-06 09:55:08.960489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:43.824 [2024-12-06 09:55:08.960580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:43.824 [2024-12-06 09:55:08.960650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:43.824 [2024-12-06 09:55:08.960660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:17:43.824 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.824 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:43.824 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.824 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length
00:17:43.824 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:43.824 09:55:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.824 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:17:43.824 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:17:43.824 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']'
00:17:43.824 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:17:43.824 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:17:43.824 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:17:43.824 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list
00:17:43.824 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:43.824 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:43.824 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:43.824 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:43.824 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.824 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:44.180 /dev/nbd0 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:44.180 1+0 records in 00:17:44.180 1+0 records out 00:17:44.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447124 s, 9.2 MB/s 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:44.180 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:44.440 /dev/nbd1 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:44.440 1+0 records in 00:17:44.440 1+0 records out 00:17:44.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302743 s, 13.5 MB/s 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:44.440 09:55:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:44.440 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:44.699 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:44.699 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:44.699 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:44.699 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:44.699 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:44.699 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:44.699 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:44.699 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:44.699 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:44.699 09:55:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:44.959 09:55:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.959 [2024-12-06 09:55:10.146650] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:44.959 [2024-12-06 09:55:10.146710] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.959 [2024-12-06 09:55:10.146752] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:44.959 [2024-12-06 09:55:10.146762] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.959 [2024-12-06 09:55:10.148872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.959 [2024-12-06 09:55:10.148910] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:44.959 [2024-12-06 09:55:10.148968] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:44.959 [2024-12-06 09:55:10.149027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.959 [2024-12-06 09:55:10.149194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.959 spare 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.959 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.219 [2024-12-06 09:55:10.249087] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:45.219 [2024-12-06 09:55:10.249117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:45.219 [2024-12-06 09:55:10.249234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:45.219 [2024-12-06 09:55:10.249360] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:45.219 [2024-12-06 09:55:10.249369] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:45.219 [2024-12-06 09:55:10.249470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.219 "name": "raid_bdev1", 00:17:45.219 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093", 00:17:45.219 "strip_size_kb": 0, 00:17:45.219 "state": "online", 00:17:45.219 "raid_level": "raid1", 00:17:45.219 "superblock": true, 00:17:45.219 "num_base_bdevs": 2, 00:17:45.219 "num_base_bdevs_discovered": 2, 00:17:45.219 "num_base_bdevs_operational": 2, 00:17:45.219 "base_bdevs_list": [ 00:17:45.219 { 00:17:45.219 "name": "spare", 00:17:45.219 "uuid": "50f1cf67-dec3-50b0-9925-ba76da0e754a", 00:17:45.219 "is_configured": true, 00:17:45.219 "data_offset": 256, 00:17:45.219 "data_size": 7936 00:17:45.219 }, 00:17:45.219 { 00:17:45.219 "name": "BaseBdev2", 00:17:45.219 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51", 00:17:45.219 "is_configured": true, 00:17:45.219 "data_offset": 256, 00:17:45.219 "data_size": 7936 00:17:45.219 } 00:17:45.219 ] 00:17:45.219 }' 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.219 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.479 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.479 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.479 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.479 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.479 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:17:45.479 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.479 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.479 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.479 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.479 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.738 "name": "raid_bdev1", 00:17:45.738 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093", 00:17:45.738 "strip_size_kb": 0, 00:17:45.738 "state": "online", 00:17:45.738 "raid_level": "raid1", 00:17:45.738 "superblock": true, 00:17:45.738 "num_base_bdevs": 2, 00:17:45.738 "num_base_bdevs_discovered": 2, 00:17:45.738 "num_base_bdevs_operational": 2, 00:17:45.738 "base_bdevs_list": [ 00:17:45.738 { 00:17:45.738 "name": "spare", 00:17:45.738 "uuid": "50f1cf67-dec3-50b0-9925-ba76da0e754a", 00:17:45.738 "is_configured": true, 00:17:45.738 "data_offset": 256, 00:17:45.738 "data_size": 7936 00:17:45.738 }, 00:17:45.738 { 00:17:45.738 "name": "BaseBdev2", 00:17:45.738 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51", 00:17:45.738 "is_configured": true, 00:17:45.738 "data_offset": 256, 00:17:45.738 "data_size": 7936 00:17:45.738 } 00:17:45.738 ] 00:17:45.738 }' 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.738 09:55:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.738 [2024-12-06 09:55:10.913389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.738 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.739 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.739 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.739 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.739 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.739 "name": "raid_bdev1", 00:17:45.739 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093", 00:17:45.739 "strip_size_kb": 0, 00:17:45.739 "state": "online", 00:17:45.739 "raid_level": "raid1", 00:17:45.739 "superblock": true, 00:17:45.739 "num_base_bdevs": 2, 00:17:45.739 "num_base_bdevs_discovered": 1, 00:17:45.739 "num_base_bdevs_operational": 1, 00:17:45.739 "base_bdevs_list": [ 00:17:45.739 { 00:17:45.739 "name": null, 00:17:45.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.739 "is_configured": false, 00:17:45.739 "data_offset": 0, 00:17:45.739 "data_size": 7936 00:17:45.739 }, 00:17:45.739 { 00:17:45.739 
"name": "BaseBdev2", 00:17:45.739 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51", 00:17:45.739 "is_configured": true, 00:17:45.739 "data_offset": 256, 00:17:45.739 "data_size": 7936 00:17:45.739 } 00:17:45.739 ] 00:17:45.739 }' 00:17:45.739 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.739 09:55:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.309 09:55:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:46.309 09:55:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.309 09:55:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.309 [2024-12-06 09:55:11.368636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:46.309 [2024-12-06 09:55:11.368895] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:46.309 [2024-12-06 09:55:11.368959] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:46.309 [2024-12-06 09:55:11.369044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:46.309 [2024-12-06 09:55:11.382627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:46.309 09:55:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.309 09:55:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:46.309 [2024-12-06 09:55:11.384739] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.246 "name": "raid_bdev1", 00:17:47.246 
"uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093", 00:17:47.246 "strip_size_kb": 0, 00:17:47.246 "state": "online", 00:17:47.246 "raid_level": "raid1", 00:17:47.246 "superblock": true, 00:17:47.246 "num_base_bdevs": 2, 00:17:47.246 "num_base_bdevs_discovered": 2, 00:17:47.246 "num_base_bdevs_operational": 2, 00:17:47.246 "process": { 00:17:47.246 "type": "rebuild", 00:17:47.246 "target": "spare", 00:17:47.246 "progress": { 00:17:47.246 "blocks": 2560, 00:17:47.246 "percent": 32 00:17:47.246 } 00:17:47.246 }, 00:17:47.246 "base_bdevs_list": [ 00:17:47.246 { 00:17:47.246 "name": "spare", 00:17:47.246 "uuid": "50f1cf67-dec3-50b0-9925-ba76da0e754a", 00:17:47.246 "is_configured": true, 00:17:47.246 "data_offset": 256, 00:17:47.246 "data_size": 7936 00:17:47.246 }, 00:17:47.246 { 00:17:47.246 "name": "BaseBdev2", 00:17:47.246 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51", 00:17:47.246 "is_configured": true, 00:17:47.246 "data_offset": 256, 00:17:47.246 "data_size": 7936 00:17:47.246 } 00:17:47.246 ] 00:17:47.246 }' 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.246 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.505 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.506 [2024-12-06 09:55:12.536520] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:47.506 
[2024-12-06 09:55:12.593165] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:47.506 [2024-12-06 09:55:12.593286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.506 [2024-12-06 09:55:12.593331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:47.506 [2024-12-06 09:55:12.593368] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.506 "name": "raid_bdev1", 00:17:47.506 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093", 00:17:47.506 "strip_size_kb": 0, 00:17:47.506 "state": "online", 00:17:47.506 "raid_level": "raid1", 00:17:47.506 "superblock": true, 00:17:47.506 "num_base_bdevs": 2, 00:17:47.506 "num_base_bdevs_discovered": 1, 00:17:47.506 "num_base_bdevs_operational": 1, 00:17:47.506 "base_bdevs_list": [ 00:17:47.506 { 00:17:47.506 "name": null, 00:17:47.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.506 "is_configured": false, 00:17:47.506 "data_offset": 0, 00:17:47.506 "data_size": 7936 00:17:47.506 }, 00:17:47.506 { 00:17:47.506 "name": "BaseBdev2", 00:17:47.506 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51", 00:17:47.506 "is_configured": true, 00:17:47.506 "data_offset": 256, 00:17:47.506 "data_size": 7936 00:17:47.506 } 00:17:47.506 ] 00:17:47.506 }' 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.506 09:55:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.075 09:55:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:48.075 09:55:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.075 09:55:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.075 [2024-12-06 09:55:13.089344] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:48.075 [2024-12-06 09:55:13.089416] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.075 [2024-12-06 09:55:13.089448] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:48.075 [2024-12-06 09:55:13.089461] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.075 [2024-12-06 09:55:13.089757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.075 [2024-12-06 09:55:13.089787] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:48.075 [2024-12-06 09:55:13.089855] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:48.075 [2024-12-06 09:55:13.089871] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:48.075 [2024-12-06 09:55:13.089882] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:48.075 [2024-12-06 09:55:13.089909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:48.075 [2024-12-06 09:55:13.103265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:48.075 spare 00:17:48.075 09:55:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.075 09:55:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:48.075 [2024-12-06 09:55:13.105403] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.014 "name": 
"raid_bdev1", 00:17:49.014 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093", 00:17:49.014 "strip_size_kb": 0, 00:17:49.014 "state": "online", 00:17:49.014 "raid_level": "raid1", 00:17:49.014 "superblock": true, 00:17:49.014 "num_base_bdevs": 2, 00:17:49.014 "num_base_bdevs_discovered": 2, 00:17:49.014 "num_base_bdevs_operational": 2, 00:17:49.014 "process": { 00:17:49.014 "type": "rebuild", 00:17:49.014 "target": "spare", 00:17:49.014 "progress": { 00:17:49.014 "blocks": 2560, 00:17:49.014 "percent": 32 00:17:49.014 } 00:17:49.014 }, 00:17:49.014 "base_bdevs_list": [ 00:17:49.014 { 00:17:49.014 "name": "spare", 00:17:49.014 "uuid": "50f1cf67-dec3-50b0-9925-ba76da0e754a", 00:17:49.014 "is_configured": true, 00:17:49.014 "data_offset": 256, 00:17:49.014 "data_size": 7936 00:17:49.014 }, 00:17:49.014 { 00:17:49.014 "name": "BaseBdev2", 00:17:49.014 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51", 00:17:49.014 "is_configured": true, 00:17:49.014 "data_offset": 256, 00:17:49.014 "data_size": 7936 00:17:49.014 } 00:17:49.014 ] 00:17:49.014 }' 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.014 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.014 [2024-12-06 09:55:14.245369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:49.273 [2024-12-06 09:55:14.313995] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:49.273 [2024-12-06 09:55:14.314056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.273 [2024-12-06 09:55:14.314074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.273 [2024-12-06 09:55:14.314081] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.273 "name": "raid_bdev1", 00:17:49.273 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093", 00:17:49.273 "strip_size_kb": 0, 00:17:49.273 "state": "online", 00:17:49.273 "raid_level": "raid1", 00:17:49.273 "superblock": true, 00:17:49.273 "num_base_bdevs": 2, 00:17:49.273 "num_base_bdevs_discovered": 1, 00:17:49.273 "num_base_bdevs_operational": 1, 00:17:49.273 "base_bdevs_list": [ 00:17:49.273 { 00:17:49.273 "name": null, 00:17:49.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.273 "is_configured": false, 00:17:49.273 "data_offset": 0, 00:17:49.273 "data_size": 7936 00:17:49.273 }, 00:17:49.273 { 00:17:49.273 "name": "BaseBdev2", 00:17:49.273 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51", 00:17:49.273 "is_configured": true, 00:17:49.273 "data_offset": 256, 00:17:49.273 "data_size": 7936 00:17:49.273 } 00:17:49.273 ] 00:17:49.273 }' 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.273 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.532 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.532 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.532 09:55:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.532 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.532 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.532 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.532 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.532 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.532 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.790 "name": "raid_bdev1", 00:17:49.790 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093", 00:17:49.790 "strip_size_kb": 0, 00:17:49.790 "state": "online", 00:17:49.790 "raid_level": "raid1", 00:17:49.790 "superblock": true, 00:17:49.790 "num_base_bdevs": 2, 00:17:49.790 "num_base_bdevs_discovered": 1, 00:17:49.790 "num_base_bdevs_operational": 1, 00:17:49.790 "base_bdevs_list": [ 00:17:49.790 { 00:17:49.790 "name": null, 00:17:49.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.790 "is_configured": false, 00:17:49.790 "data_offset": 0, 00:17:49.790 "data_size": 7936 00:17:49.790 }, 00:17:49.790 { 00:17:49.790 "name": "BaseBdev2", 00:17:49.790 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51", 00:17:49.790 "is_configured": true, 00:17:49.790 "data_offset": 256, 00:17:49.790 "data_size": 7936 00:17:49.790 } 00:17:49.790 ] 00:17:49.790 }' 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.790 [2024-12-06 09:55:14.950494] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:49.790 [2024-12-06 09:55:14.950550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.790 [2024-12-06 09:55:14.950576] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:49.790 [2024-12-06 09:55:14.950585] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.790 [2024-12-06 09:55:14.950836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.790 [2024-12-06 09:55:14.950847] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:17:49.790 [2024-12-06 09:55:14.950898] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:49.790 [2024-12-06 09:55:14.950910] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:49.790 [2024-12-06 09:55:14.950921] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:49.790 [2024-12-06 09:55:14.950931] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:49.790 BaseBdev1 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.790 09:55:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.728 09:55:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.001 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.001 "name": "raid_bdev1", 00:17:51.001 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093", 00:17:51.001 "strip_size_kb": 0, 00:17:51.001 "state": "online", 00:17:51.001 "raid_level": "raid1", 00:17:51.001 "superblock": true, 00:17:51.001 "num_base_bdevs": 2, 00:17:51.001 "num_base_bdevs_discovered": 1, 00:17:51.001 "num_base_bdevs_operational": 1, 00:17:51.001 "base_bdevs_list": [ 00:17:51.001 { 00:17:51.002 "name": null, 00:17:51.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.002 "is_configured": false, 00:17:51.002 "data_offset": 0, 00:17:51.002 "data_size": 7936 00:17:51.002 }, 00:17:51.002 { 00:17:51.002 "name": "BaseBdev2", 00:17:51.002 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51", 00:17:51.002 "is_configured": true, 00:17:51.002 "data_offset": 256, 00:17:51.002 "data_size": 7936 00:17:51.002 } 00:17:51.002 ] 00:17:51.002 }' 00:17:51.002 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.002 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.260 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.260 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.260 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.260 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.260 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.260 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.260 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.260 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.260 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.260 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.260 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.260 "name": "raid_bdev1", 00:17:51.260 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093", 00:17:51.260 "strip_size_kb": 0, 00:17:51.260 "state": "online", 00:17:51.260 "raid_level": "raid1", 00:17:51.260 "superblock": true, 00:17:51.260 "num_base_bdevs": 2, 00:17:51.260 "num_base_bdevs_discovered": 1, 00:17:51.260 "num_base_bdevs_operational": 1, 00:17:51.260 "base_bdevs_list": [ 00:17:51.260 { 00:17:51.260 "name": null, 00:17:51.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.260 "is_configured": false, 00:17:51.260 "data_offset": 0, 00:17:51.260 "data_size": 7936 00:17:51.260 }, 00:17:51.260 { 00:17:51.260 "name": "BaseBdev2", 00:17:51.260 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51", 00:17:51.260 "is_configured": 
true, 00:17:51.260 "data_offset": 256, 00:17:51.260 "data_size": 7936 00:17:51.260 } 00:17:51.260 ] 00:17:51.260 }' 00:17:51.260 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.517 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.518 [2024-12-06 09:55:16.603736] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.518 [2024-12-06 09:55:16.603941] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:51.518 [2024-12-06 09:55:16.603958] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:51.518 request: 00:17:51.518 { 00:17:51.518 "base_bdev": "BaseBdev1", 00:17:51.518 "raid_bdev": "raid_bdev1", 00:17:51.518 "method": "bdev_raid_add_base_bdev", 00:17:51.518 "req_id": 1 00:17:51.518 } 00:17:51.518 Got JSON-RPC error response 00:17:51.518 response: 00:17:51.518 { 00:17:51.518 "code": -22, 00:17:51.518 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:51.518 } 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:51.518 09:55:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.455 "name": "raid_bdev1", 00:17:52.455 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093", 00:17:52.455 "strip_size_kb": 0, 00:17:52.455 "state": "online", 00:17:52.455 "raid_level": "raid1", 00:17:52.455 "superblock": true, 00:17:52.455 "num_base_bdevs": 2, 00:17:52.455 "num_base_bdevs_discovered": 1, 00:17:52.455 "num_base_bdevs_operational": 1, 00:17:52.455 "base_bdevs_list": [ 00:17:52.455 { 00:17:52.455 "name": null, 00:17:52.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.455 "is_configured": false, 00:17:52.455 
"data_offset": 0, 00:17:52.455 "data_size": 7936 00:17:52.455 }, 00:17:52.455 { 00:17:52.455 "name": "BaseBdev2", 00:17:52.455 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51", 00:17:52.455 "is_configured": true, 00:17:52.455 "data_offset": 256, 00:17:52.455 "data_size": 7936 00:17:52.455 } 00:17:52.455 ] 00:17:52.455 }' 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.455 09:55:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.023 "name": "raid_bdev1", 00:17:53.023 "uuid": "a912d7a9-3046-4e60-abdb-71a8e971d093", 00:17:53.023 
"strip_size_kb": 0, 00:17:53.023 "state": "online", 00:17:53.023 "raid_level": "raid1", 00:17:53.023 "superblock": true, 00:17:53.023 "num_base_bdevs": 2, 00:17:53.023 "num_base_bdevs_discovered": 1, 00:17:53.023 "num_base_bdevs_operational": 1, 00:17:53.023 "base_bdevs_list": [ 00:17:53.023 { 00:17:53.023 "name": null, 00:17:53.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.023 "is_configured": false, 00:17:53.023 "data_offset": 0, 00:17:53.023 "data_size": 7936 00:17:53.023 }, 00:17:53.023 { 00:17:53.023 "name": "BaseBdev2", 00:17:53.023 "uuid": "4a3de2d9-e471-54b4-955c-21dd9aa5da51", 00:17:53.023 "is_configured": true, 00:17:53.023 "data_offset": 256, 00:17:53.023 "data_size": 7936 00:17:53.023 } 00:17:53.023 ] 00:17:53.023 }' 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87647 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87647 ']' 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87647 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87647 00:17:53.023 09:55:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.023 killing process with pid 87647 00:17:53.023 Received shutdown signal, test time was about 60.000000 seconds 00:17:53.023 00:17:53.023 Latency(us) 00:17:53.023 [2024-12-06T09:55:18.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.023 [2024-12-06T09:55:18.296Z] =================================================================================================================== 00:17:53.023 [2024-12-06T09:55:18.296Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:53.023 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87647' 00:17:53.024 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87647 00:17:53.024 [2024-12-06 09:55:18.194956] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.024 [2024-12-06 09:55:18.195091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.024 09:55:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87647 00:17:53.024 [2024-12-06 09:55:18.195145] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.024 [2024-12-06 09:55:18.195159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:53.284 [2024-12-06 09:55:18.533755] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.673 09:55:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:17:54.673 00:17:54.673 real 0m19.804s 00:17:54.673 user 0m25.587s 00:17:54.673 sys 0m2.704s 00:17:54.673 09:55:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.673 09:55:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.673 ************************************ 00:17:54.673 END TEST raid_rebuild_test_sb_md_separate 00:17:54.673 ************************************ 00:17:54.673 09:55:19 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:54.673 09:55:19 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:54.673 09:55:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:54.673 09:55:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.673 09:55:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.673 ************************************ 00:17:54.673 START TEST raid_state_function_test_sb_md_interleaved 00:17:54.673 ************************************ 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.673 09:55:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88337 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88337' 00:17:54.673 Process raid pid: 88337 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88337 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88337 ']' 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.673 09:55:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.673 [2024-12-06 09:55:19.884465] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:17:54.673 [2024-12-06 09:55:19.884582] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.933 [2024-12-06 09:55:20.065840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.933 [2024-12-06 09:55:20.198109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.193 [2024-12-06 09:55:20.436764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.193 [2024-12-06 09:55:20.436811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.453 [2024-12-06 09:55:20.707198] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.453 [2024-12-06 09:55:20.707261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.453 [2024-12-06 09:55:20.707270] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.453 [2024-12-06 09:55:20.707280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.453 09:55:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.453 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.713 09:55:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.713 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.713 "name": "Existed_Raid", 00:17:55.713 "uuid": "6d9f58be-6356-4040-afc1-a6c42b52e074", 00:17:55.713 "strip_size_kb": 0, 00:17:55.713 "state": "configuring", 00:17:55.713 "raid_level": "raid1", 00:17:55.713 "superblock": true, 00:17:55.713 "num_base_bdevs": 2, 00:17:55.713 "num_base_bdevs_discovered": 0, 00:17:55.713 "num_base_bdevs_operational": 2, 00:17:55.713 "base_bdevs_list": [ 00:17:55.713 { 00:17:55.713 "name": "BaseBdev1", 00:17:55.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.713 "is_configured": false, 00:17:55.713 "data_offset": 0, 00:17:55.713 "data_size": 0 00:17:55.713 }, 00:17:55.713 { 00:17:55.713 "name": "BaseBdev2", 00:17:55.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.713 "is_configured": false, 00:17:55.713 "data_offset": 0, 00:17:55.713 "data_size": 0 00:17:55.713 } 00:17:55.713 ] 00:17:55.713 }' 00:17:55.713 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.713 09:55:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.972 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:55.972 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.972 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.973 [2024-12-06 09:55:21.130378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.973 [2024-12-06 09:55:21.130486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.973 [2024-12-06 09:55:21.142348] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:55.973 [2024-12-06 09:55:21.142425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:55.973 [2024-12-06 09:55:21.142450] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.973 [2024-12-06 09:55:21.142475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.973 [2024-12-06 09:55:21.196563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.973 BaseBdev1 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.973 [ 00:17:55.973 { 00:17:55.973 "name": "BaseBdev1", 00:17:55.973 "aliases": [ 00:17:55.973 "fc515963-be46-4285-9de3-e5490844ed7e" 00:17:55.973 ], 00:17:55.973 "product_name": "Malloc disk", 00:17:55.973 "block_size": 4128, 00:17:55.973 "num_blocks": 8192, 00:17:55.973 "uuid": "fc515963-be46-4285-9de3-e5490844ed7e", 00:17:55.973 "md_size": 32, 00:17:55.973 
"md_interleave": true, 00:17:55.973 "dif_type": 0, 00:17:55.973 "assigned_rate_limits": { 00:17:55.973 "rw_ios_per_sec": 0, 00:17:55.973 "rw_mbytes_per_sec": 0, 00:17:55.973 "r_mbytes_per_sec": 0, 00:17:55.973 "w_mbytes_per_sec": 0 00:17:55.973 }, 00:17:55.973 "claimed": true, 00:17:55.973 "claim_type": "exclusive_write", 00:17:55.973 "zoned": false, 00:17:55.973 "supported_io_types": { 00:17:55.973 "read": true, 00:17:55.973 "write": true, 00:17:55.973 "unmap": true, 00:17:55.973 "flush": true, 00:17:55.973 "reset": true, 00:17:55.973 "nvme_admin": false, 00:17:55.973 "nvme_io": false, 00:17:55.973 "nvme_io_md": false, 00:17:55.973 "write_zeroes": true, 00:17:55.973 "zcopy": true, 00:17:55.973 "get_zone_info": false, 00:17:55.973 "zone_management": false, 00:17:55.973 "zone_append": false, 00:17:55.973 "compare": false, 00:17:55.973 "compare_and_write": false, 00:17:55.973 "abort": true, 00:17:55.973 "seek_hole": false, 00:17:55.973 "seek_data": false, 00:17:55.973 "copy": true, 00:17:55.973 "nvme_iov_md": false 00:17:55.973 }, 00:17:55.973 "memory_domains": [ 00:17:55.973 { 00:17:55.973 "dma_device_id": "system", 00:17:55.973 "dma_device_type": 1 00:17:55.973 }, 00:17:55.973 { 00:17:55.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.973 "dma_device_type": 2 00:17:55.973 } 00:17:55.973 ], 00:17:55.973 "driver_specific": {} 00:17:55.973 } 00:17:55.973 ] 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.973 09:55:21 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.973 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.232 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.232 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.232 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.232 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.232 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.232 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.232 "name": "Existed_Raid", 00:17:56.232 "uuid": "580da554-0df3-44ff-9059-0d5a438703fe", 00:17:56.232 "strip_size_kb": 0, 00:17:56.232 "state": "configuring", 00:17:56.232 "raid_level": "raid1", 
00:17:56.232 "superblock": true, 00:17:56.232 "num_base_bdevs": 2, 00:17:56.232 "num_base_bdevs_discovered": 1, 00:17:56.232 "num_base_bdevs_operational": 2, 00:17:56.232 "base_bdevs_list": [ 00:17:56.232 { 00:17:56.232 "name": "BaseBdev1", 00:17:56.232 "uuid": "fc515963-be46-4285-9de3-e5490844ed7e", 00:17:56.232 "is_configured": true, 00:17:56.232 "data_offset": 256, 00:17:56.232 "data_size": 7936 00:17:56.232 }, 00:17:56.232 { 00:17:56.232 "name": "BaseBdev2", 00:17:56.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.232 "is_configured": false, 00:17:56.232 "data_offset": 0, 00:17:56.232 "data_size": 0 00:17:56.232 } 00:17:56.232 ] 00:17:56.232 }' 00:17:56.232 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.232 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.492 [2024-12-06 09:55:21.671836] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.492 [2024-12-06 09:55:21.671881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.492 [2024-12-06 09:55:21.683871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.492 [2024-12-06 09:55:21.685931] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.492 [2024-12-06 09:55:21.686003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.492 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.493 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.493 
09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.493 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.493 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.493 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.493 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.493 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.493 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.493 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.493 "name": "Existed_Raid", 00:17:56.493 "uuid": "b427b8ba-bdec-4a33-a059-8e8e52b3ec82", 00:17:56.493 "strip_size_kb": 0, 00:17:56.493 "state": "configuring", 00:17:56.493 "raid_level": "raid1", 00:17:56.493 "superblock": true, 00:17:56.493 "num_base_bdevs": 2, 00:17:56.493 "num_base_bdevs_discovered": 1, 00:17:56.493 "num_base_bdevs_operational": 2, 00:17:56.493 "base_bdevs_list": [ 00:17:56.493 { 00:17:56.493 "name": "BaseBdev1", 00:17:56.493 "uuid": "fc515963-be46-4285-9de3-e5490844ed7e", 00:17:56.493 "is_configured": true, 00:17:56.493 "data_offset": 256, 00:17:56.493 "data_size": 7936 00:17:56.493 }, 00:17:56.493 { 00:17:56.493 "name": "BaseBdev2", 00:17:56.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.493 "is_configured": false, 00:17:56.493 "data_offset": 0, 00:17:56.493 "data_size": 0 00:17:56.493 } 00:17:56.493 ] 00:17:56.493 }' 00:17:56.493 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:56.493 09:55:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.090 [2024-12-06 09:55:22.158316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.090 [2024-12-06 09:55:22.158548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:57.090 [2024-12-06 09:55:22.158562] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:57.090 [2024-12-06 09:55:22.158649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:57.090 [2024-12-06 09:55:22.158724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:57.090 [2024-12-06 09:55:22.158735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:57.090 [2024-12-06 09:55:22.158802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.090 BaseBdev2 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.090 [ 00:17:57.090 { 00:17:57.090 "name": "BaseBdev2", 00:17:57.090 "aliases": [ 00:17:57.090 "aa7923a7-2ef3-4816-9270-3bee5f53b206" 00:17:57.090 ], 00:17:57.090 "product_name": "Malloc disk", 00:17:57.090 "block_size": 4128, 00:17:57.090 "num_blocks": 8192, 00:17:57.090 "uuid": "aa7923a7-2ef3-4816-9270-3bee5f53b206", 00:17:57.090 "md_size": 32, 00:17:57.090 "md_interleave": true, 00:17:57.090 "dif_type": 0, 00:17:57.090 "assigned_rate_limits": { 00:17:57.090 "rw_ios_per_sec": 0, 00:17:57.090 "rw_mbytes_per_sec": 0, 00:17:57.090 "r_mbytes_per_sec": 0, 00:17:57.090 "w_mbytes_per_sec": 0 00:17:57.090 }, 00:17:57.090 "claimed": true, 00:17:57.090 "claim_type": "exclusive_write", 
00:17:57.090 "zoned": false, 00:17:57.090 "supported_io_types": { 00:17:57.090 "read": true, 00:17:57.090 "write": true, 00:17:57.090 "unmap": true, 00:17:57.090 "flush": true, 00:17:57.090 "reset": true, 00:17:57.090 "nvme_admin": false, 00:17:57.090 "nvme_io": false, 00:17:57.090 "nvme_io_md": false, 00:17:57.090 "write_zeroes": true, 00:17:57.090 "zcopy": true, 00:17:57.090 "get_zone_info": false, 00:17:57.090 "zone_management": false, 00:17:57.090 "zone_append": false, 00:17:57.090 "compare": false, 00:17:57.090 "compare_and_write": false, 00:17:57.090 "abort": true, 00:17:57.090 "seek_hole": false, 00:17:57.090 "seek_data": false, 00:17:57.090 "copy": true, 00:17:57.090 "nvme_iov_md": false 00:17:57.090 }, 00:17:57.090 "memory_domains": [ 00:17:57.090 { 00:17:57.090 "dma_device_id": "system", 00:17:57.090 "dma_device_type": 1 00:17:57.090 }, 00:17:57.090 { 00:17:57.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.090 "dma_device_type": 2 00:17:57.090 } 00:17:57.090 ], 00:17:57.090 "driver_specific": {} 00:17:57.090 } 00:17:57.090 ] 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.090 
09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.090 "name": "Existed_Raid", 00:17:57.090 "uuid": "b427b8ba-bdec-4a33-a059-8e8e52b3ec82", 00:17:57.090 "strip_size_kb": 0, 00:17:57.090 "state": "online", 00:17:57.090 "raid_level": "raid1", 00:17:57.090 "superblock": true, 00:17:57.090 "num_base_bdevs": 2, 00:17:57.090 "num_base_bdevs_discovered": 2, 00:17:57.090 
"num_base_bdevs_operational": 2, 00:17:57.090 "base_bdevs_list": [ 00:17:57.090 { 00:17:57.090 "name": "BaseBdev1", 00:17:57.090 "uuid": "fc515963-be46-4285-9de3-e5490844ed7e", 00:17:57.090 "is_configured": true, 00:17:57.090 "data_offset": 256, 00:17:57.090 "data_size": 7936 00:17:57.090 }, 00:17:57.090 { 00:17:57.090 "name": "BaseBdev2", 00:17:57.090 "uuid": "aa7923a7-2ef3-4816-9270-3bee5f53b206", 00:17:57.090 "is_configured": true, 00:17:57.090 "data_offset": 256, 00:17:57.090 "data_size": 7936 00:17:57.090 } 00:17:57.090 ] 00:17:57.090 }' 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.090 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.660 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:57.660 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:57.660 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:57.660 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:57.660 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:57.660 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:57.660 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:57.660 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:57.660 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.660 09:55:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.660 [2024-12-06 09:55:22.649783] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.660 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.660 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:57.660 "name": "Existed_Raid", 00:17:57.660 "aliases": [ 00:17:57.660 "b427b8ba-bdec-4a33-a059-8e8e52b3ec82" 00:17:57.660 ], 00:17:57.660 "product_name": "Raid Volume", 00:17:57.660 "block_size": 4128, 00:17:57.660 "num_blocks": 7936, 00:17:57.660 "uuid": "b427b8ba-bdec-4a33-a059-8e8e52b3ec82", 00:17:57.660 "md_size": 32, 00:17:57.660 "md_interleave": true, 00:17:57.660 "dif_type": 0, 00:17:57.660 "assigned_rate_limits": { 00:17:57.660 "rw_ios_per_sec": 0, 00:17:57.660 "rw_mbytes_per_sec": 0, 00:17:57.660 "r_mbytes_per_sec": 0, 00:17:57.660 "w_mbytes_per_sec": 0 00:17:57.660 }, 00:17:57.660 "claimed": false, 00:17:57.660 "zoned": false, 00:17:57.660 "supported_io_types": { 00:17:57.660 "read": true, 00:17:57.660 "write": true, 00:17:57.660 "unmap": false, 00:17:57.660 "flush": false, 00:17:57.660 "reset": true, 00:17:57.660 "nvme_admin": false, 00:17:57.660 "nvme_io": false, 00:17:57.660 "nvme_io_md": false, 00:17:57.660 "write_zeroes": true, 00:17:57.660 "zcopy": false, 00:17:57.660 "get_zone_info": false, 00:17:57.660 "zone_management": false, 00:17:57.660 "zone_append": false, 00:17:57.660 "compare": false, 00:17:57.660 "compare_and_write": false, 00:17:57.660 "abort": false, 00:17:57.660 "seek_hole": false, 00:17:57.660 "seek_data": false, 00:17:57.660 "copy": false, 00:17:57.660 "nvme_iov_md": false 00:17:57.660 }, 00:17:57.660 "memory_domains": [ 00:17:57.660 { 00:17:57.660 "dma_device_id": "system", 00:17:57.660 "dma_device_type": 1 00:17:57.660 }, 00:17:57.660 { 00:17:57.660 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:57.660 "dma_device_type": 2 00:17:57.660 }, 00:17:57.660 { 00:17:57.660 "dma_device_id": "system", 00:17:57.660 "dma_device_type": 1 00:17:57.660 }, 00:17:57.660 { 00:17:57.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.660 "dma_device_type": 2 00:17:57.660 } 00:17:57.660 ], 00:17:57.660 "driver_specific": { 00:17:57.660 "raid": { 00:17:57.660 "uuid": "b427b8ba-bdec-4a33-a059-8e8e52b3ec82", 00:17:57.660 "strip_size_kb": 0, 00:17:57.660 "state": "online", 00:17:57.660 "raid_level": "raid1", 00:17:57.660 "superblock": true, 00:17:57.660 "num_base_bdevs": 2, 00:17:57.660 "num_base_bdevs_discovered": 2, 00:17:57.660 "num_base_bdevs_operational": 2, 00:17:57.660 "base_bdevs_list": [ 00:17:57.660 { 00:17:57.660 "name": "BaseBdev1", 00:17:57.661 "uuid": "fc515963-be46-4285-9de3-e5490844ed7e", 00:17:57.661 "is_configured": true, 00:17:57.661 "data_offset": 256, 00:17:57.661 "data_size": 7936 00:17:57.661 }, 00:17:57.661 { 00:17:57.661 "name": "BaseBdev2", 00:17:57.661 "uuid": "aa7923a7-2ef3-4816-9270-3bee5f53b206", 00:17:57.661 "is_configured": true, 00:17:57.661 "data_offset": 256, 00:17:57.661 "data_size": 7936 00:17:57.661 } 00:17:57.661 ] 00:17:57.661 } 00:17:57.661 } 00:17:57.661 }' 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:57.661 BaseBdev2' 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:57.661 
09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.661 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.661 [2024-12-06 09:55:22.889171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.920 09:55:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.920 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.920 09:55:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.920 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.920 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.920 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.920 "name": "Existed_Raid", 00:17:57.920 "uuid": "b427b8ba-bdec-4a33-a059-8e8e52b3ec82", 00:17:57.920 "strip_size_kb": 0, 00:17:57.920 "state": "online", 00:17:57.920 "raid_level": "raid1", 00:17:57.920 "superblock": true, 00:17:57.920 "num_base_bdevs": 2, 00:17:57.920 "num_base_bdevs_discovered": 1, 00:17:57.920 "num_base_bdevs_operational": 1, 00:17:57.920 "base_bdevs_list": [ 00:17:57.920 { 00:17:57.920 "name": null, 00:17:57.920 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:57.920 "is_configured": false, 00:17:57.920 "data_offset": 0, 00:17:57.920 "data_size": 7936 00:17:57.920 }, 00:17:57.920 { 00:17:57.920 "name": "BaseBdev2", 00:17:57.920 "uuid": "aa7923a7-2ef3-4816-9270-3bee5f53b206", 00:17:57.920 "is_configured": true, 00:17:57.920 "data_offset": 256, 00:17:57.920 "data_size": 7936 00:17:57.920 } 00:17:57.920 ] 00:17:57.920 }' 00:17:57.920 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.921 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.180 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:58.180 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:58.180 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.180 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.180 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.180 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:58.180 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:58.439 09:55:23 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.439 [2024-12-06 09:55:23.488481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:58.439 [2024-12-06 09:55:23.488614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.439 [2024-12-06 09:55:23.589560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.439 [2024-12-06 09:55:23.589615] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.439 [2024-12-06 09:55:23.589628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88337 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88337 ']' 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88337 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88337 00:17:58.439 killing process with pid 88337 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88337' 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88337 00:17:58.439 [2024-12-06 09:55:23.685468] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.439 09:55:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88337 00:17:58.439 [2024-12-06 09:55:23.703003] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:59.818 
09:55:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:59.818 00:17:59.818 real 0m5.117s 00:17:59.818 user 0m7.185s 00:17:59.818 sys 0m0.972s 00:17:59.818 09:55:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.818 09:55:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.818 ************************************ 00:17:59.818 END TEST raid_state_function_test_sb_md_interleaved 00:17:59.818 ************************************ 00:17:59.818 09:55:24 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:59.818 09:55:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:59.818 09:55:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.818 09:55:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.818 ************************************ 00:17:59.818 START TEST raid_superblock_test_md_interleaved 00:17:59.818 ************************************ 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88585 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88585 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88585 ']' 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.818 09:55:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.818 [2024-12-06 09:55:25.068516] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:17:59.818 [2024-12-06 09:55:25.068635] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88585 ] 00:18:00.076 [2024-12-06 09:55:25.243548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.335 [2024-12-06 09:55:25.372394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.335 [2024-12-06 09:55:25.606023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.335 [2024-12-06 09:55:25.606063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.903 malloc1 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.903 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.904 [2024-12-06 09:55:25.936528] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.904 [2024-12-06 09:55:25.936595] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.904 [2024-12-06 09:55:25.936618] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:00.904 [2024-12-06 09:55:25.936627] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.904 
[2024-12-06 09:55:25.938633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.904 [2024-12-06 09:55:25.938667] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.904 pt1 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.904 malloc2 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.904 09:55:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.904 [2024-12-06 09:55:26.002285] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.904 [2024-12-06 09:55:26.002336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.904 [2024-12-06 09:55:26.002358] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:00.904 [2024-12-06 09:55:26.002366] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.904 [2024-12-06 09:55:26.004359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.904 [2024-12-06 09:55:26.004391] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.904 pt2 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.904 [2024-12-06 09:55:26.014309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.904 [2024-12-06 09:55:26.016225] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.904 [2024-12-06 09:55:26.016411] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:00.904 [2024-12-06 09:55:26.016424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:00.904 [2024-12-06 09:55:26.016497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:00.904 [2024-12-06 09:55:26.016563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:00.904 [2024-12-06 09:55:26.016577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:00.904 [2024-12-06 09:55:26.016642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.904 
09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.904 "name": "raid_bdev1", 00:18:00.904 "uuid": "f3082749-811b-4a3d-b9fd-45e31c2ad4ae", 00:18:00.904 "strip_size_kb": 0, 00:18:00.904 "state": "online", 00:18:00.904 "raid_level": "raid1", 00:18:00.904 "superblock": true, 00:18:00.904 "num_base_bdevs": 2, 00:18:00.904 "num_base_bdevs_discovered": 2, 00:18:00.904 "num_base_bdevs_operational": 2, 00:18:00.904 "base_bdevs_list": [ 00:18:00.904 { 00:18:00.904 "name": "pt1", 00:18:00.904 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.904 "is_configured": true, 00:18:00.904 "data_offset": 256, 00:18:00.904 "data_size": 7936 00:18:00.904 }, 00:18:00.904 { 00:18:00.904 "name": "pt2", 00:18:00.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.904 "is_configured": true, 00:18:00.904 "data_offset": 256, 00:18:00.904 "data_size": 7936 00:18:00.904 } 00:18:00.904 ] 00:18:00.904 }' 00:18:00.904 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.904 09:55:26 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.474 [2024-12-06 09:55:26.497687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:01.474 "name": "raid_bdev1", 00:18:01.474 "aliases": [ 00:18:01.474 "f3082749-811b-4a3d-b9fd-45e31c2ad4ae" 00:18:01.474 ], 00:18:01.474 "product_name": "Raid Volume", 00:18:01.474 "block_size": 4128, 00:18:01.474 "num_blocks": 7936, 00:18:01.474 "uuid": "f3082749-811b-4a3d-b9fd-45e31c2ad4ae", 00:18:01.474 "md_size": 32, 
00:18:01.474 "md_interleave": true, 00:18:01.474 "dif_type": 0, 00:18:01.474 "assigned_rate_limits": { 00:18:01.474 "rw_ios_per_sec": 0, 00:18:01.474 "rw_mbytes_per_sec": 0, 00:18:01.474 "r_mbytes_per_sec": 0, 00:18:01.474 "w_mbytes_per_sec": 0 00:18:01.474 }, 00:18:01.474 "claimed": false, 00:18:01.474 "zoned": false, 00:18:01.474 "supported_io_types": { 00:18:01.474 "read": true, 00:18:01.474 "write": true, 00:18:01.474 "unmap": false, 00:18:01.474 "flush": false, 00:18:01.474 "reset": true, 00:18:01.474 "nvme_admin": false, 00:18:01.474 "nvme_io": false, 00:18:01.474 "nvme_io_md": false, 00:18:01.474 "write_zeroes": true, 00:18:01.474 "zcopy": false, 00:18:01.474 "get_zone_info": false, 00:18:01.474 "zone_management": false, 00:18:01.474 "zone_append": false, 00:18:01.474 "compare": false, 00:18:01.474 "compare_and_write": false, 00:18:01.474 "abort": false, 00:18:01.474 "seek_hole": false, 00:18:01.474 "seek_data": false, 00:18:01.474 "copy": false, 00:18:01.474 "nvme_iov_md": false 00:18:01.474 }, 00:18:01.474 "memory_domains": [ 00:18:01.474 { 00:18:01.474 "dma_device_id": "system", 00:18:01.474 "dma_device_type": 1 00:18:01.474 }, 00:18:01.474 { 00:18:01.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.474 "dma_device_type": 2 00:18:01.474 }, 00:18:01.474 { 00:18:01.474 "dma_device_id": "system", 00:18:01.474 "dma_device_type": 1 00:18:01.474 }, 00:18:01.474 { 00:18:01.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.474 "dma_device_type": 2 00:18:01.474 } 00:18:01.474 ], 00:18:01.474 "driver_specific": { 00:18:01.474 "raid": { 00:18:01.474 "uuid": "f3082749-811b-4a3d-b9fd-45e31c2ad4ae", 00:18:01.474 "strip_size_kb": 0, 00:18:01.474 "state": "online", 00:18:01.474 "raid_level": "raid1", 00:18:01.474 "superblock": true, 00:18:01.474 "num_base_bdevs": 2, 00:18:01.474 "num_base_bdevs_discovered": 2, 00:18:01.474 "num_base_bdevs_operational": 2, 00:18:01.474 "base_bdevs_list": [ 00:18:01.474 { 00:18:01.474 "name": "pt1", 00:18:01.474 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:01.474 "is_configured": true, 00:18:01.474 "data_offset": 256, 00:18:01.474 "data_size": 7936 00:18:01.474 }, 00:18:01.474 { 00:18:01.474 "name": "pt2", 00:18:01.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.474 "is_configured": true, 00:18:01.474 "data_offset": 256, 00:18:01.474 "data_size": 7936 00:18:01.474 } 00:18:01.474 ] 00:18:01.474 } 00:18:01.474 } 00:18:01.474 }' 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:01.474 pt2' 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:01.474 09:55:26 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.474 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.735 [2024-12-06 09:55:26.749300] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f3082749-811b-4a3d-b9fd-45e31c2ad4ae 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f3082749-811b-4a3d-b9fd-45e31c2ad4ae ']' 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.735 [2024-12-06 09:55:26.792920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.735 [2024-12-06 09:55:26.792945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.735 [2024-12-06 09:55:26.793023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.735 [2024-12-06 09:55:26.793080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.735 [2024-12-06 09:55:26.793096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.735 09:55:26 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.735 09:55:26 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.735 [2024-12-06 09:55:26.912731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:01.735 [2024-12-06 09:55:26.914746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:01.735 [2024-12-06 09:55:26.914819] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:01.735 [2024-12-06 09:55:26.914867] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:01.735 [2024-12-06 09:55:26.914880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.735 [2024-12-06 09:55:26.914889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:01.735 request: 00:18:01.735 { 00:18:01.735 "name": "raid_bdev1", 00:18:01.735 "raid_level": "raid1", 00:18:01.735 "base_bdevs": [ 00:18:01.735 "malloc1", 00:18:01.735 "malloc2" 00:18:01.735 ], 00:18:01.735 "superblock": false, 00:18:01.735 "method": "bdev_raid_create", 00:18:01.735 "req_id": 1 00:18:01.735 } 00:18:01.735 Got JSON-RPC error response 00:18:01.735 response: 00:18:01.735 { 00:18:01.735 "code": -17, 00:18:01.735 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:01.735 } 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:01.735 09:55:26 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.735 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.735 [2024-12-06 09:55:26.976604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:01.735 [2024-12-06 09:55:26.976729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.736 [2024-12-06 09:55:26.976763] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:01.736 [2024-12-06 09:55:26.976777] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.736 [2024-12-06 09:55:26.978875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.736 [2024-12-06 09:55:26.978909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:01.736 [2024-12-06 09:55:26.978952] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:01.736 [2024-12-06 09:55:26.978999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:01.736 pt1 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.736 09:55:26 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.736 09:55:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.996 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.996 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.996 
"name": "raid_bdev1", 00:18:01.996 "uuid": "f3082749-811b-4a3d-b9fd-45e31c2ad4ae", 00:18:01.996 "strip_size_kb": 0, 00:18:01.996 "state": "configuring", 00:18:01.996 "raid_level": "raid1", 00:18:01.996 "superblock": true, 00:18:01.996 "num_base_bdevs": 2, 00:18:01.996 "num_base_bdevs_discovered": 1, 00:18:01.996 "num_base_bdevs_operational": 2, 00:18:01.996 "base_bdevs_list": [ 00:18:01.996 { 00:18:01.996 "name": "pt1", 00:18:01.996 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.996 "is_configured": true, 00:18:01.996 "data_offset": 256, 00:18:01.996 "data_size": 7936 00:18:01.996 }, 00:18:01.996 { 00:18:01.996 "name": null, 00:18:01.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.996 "is_configured": false, 00:18:01.996 "data_offset": 256, 00:18:01.996 "data_size": 7936 00:18:01.996 } 00:18:01.996 ] 00:18:01.996 }' 00:18:01.996 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.996 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.257 [2024-12-06 09:55:27.380062] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:02.257 [2024-12-06 09:55:27.380199] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.257 [2024-12-06 09:55:27.380238] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:02.257 [2024-12-06 09:55:27.380271] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.257 [2024-12-06 09:55:27.380435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.257 [2024-12-06 09:55:27.380484] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:02.257 [2024-12-06 09:55:27.380550] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:02.257 [2024-12-06 09:55:27.380595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.257 [2024-12-06 09:55:27.380696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:02.257 [2024-12-06 09:55:27.380733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:02.257 [2024-12-06 09:55:27.380824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:02.257 [2024-12-06 09:55:27.380921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:02.257 [2024-12-06 09:55:27.380955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:02.257 [2024-12-06 09:55:27.381051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.257 pt2 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:02.257 09:55:27 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.257 "name": 
"raid_bdev1", 00:18:02.257 "uuid": "f3082749-811b-4a3d-b9fd-45e31c2ad4ae", 00:18:02.257 "strip_size_kb": 0, 00:18:02.257 "state": "online", 00:18:02.257 "raid_level": "raid1", 00:18:02.257 "superblock": true, 00:18:02.257 "num_base_bdevs": 2, 00:18:02.257 "num_base_bdevs_discovered": 2, 00:18:02.257 "num_base_bdevs_operational": 2, 00:18:02.257 "base_bdevs_list": [ 00:18:02.257 { 00:18:02.257 "name": "pt1", 00:18:02.257 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.257 "is_configured": true, 00:18:02.257 "data_offset": 256, 00:18:02.257 "data_size": 7936 00:18:02.257 }, 00:18:02.257 { 00:18:02.257 "name": "pt2", 00:18:02.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.257 "is_configured": true, 00:18:02.257 "data_offset": 256, 00:18:02.257 "data_size": 7936 00:18:02.257 } 00:18:02.257 ] 00:18:02.257 }' 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.257 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.827 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:02.827 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:02.827 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:02.827 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:02.827 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:02.827 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:02.827 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:02.827 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.827 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.827 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.827 [2024-12-06 09:55:27.827634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.827 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.827 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:02.827 "name": "raid_bdev1", 00:18:02.827 "aliases": [ 00:18:02.827 "f3082749-811b-4a3d-b9fd-45e31c2ad4ae" 00:18:02.827 ], 00:18:02.827 "product_name": "Raid Volume", 00:18:02.827 "block_size": 4128, 00:18:02.827 "num_blocks": 7936, 00:18:02.827 "uuid": "f3082749-811b-4a3d-b9fd-45e31c2ad4ae", 00:18:02.827 "md_size": 32, 00:18:02.828 "md_interleave": true, 00:18:02.828 "dif_type": 0, 00:18:02.828 "assigned_rate_limits": { 00:18:02.828 "rw_ios_per_sec": 0, 00:18:02.828 "rw_mbytes_per_sec": 0, 00:18:02.828 "r_mbytes_per_sec": 0, 00:18:02.828 "w_mbytes_per_sec": 0 00:18:02.828 }, 00:18:02.828 "claimed": false, 00:18:02.828 "zoned": false, 00:18:02.828 "supported_io_types": { 00:18:02.828 "read": true, 00:18:02.828 "write": true, 00:18:02.828 "unmap": false, 00:18:02.828 "flush": false, 00:18:02.828 "reset": true, 00:18:02.828 "nvme_admin": false, 00:18:02.828 "nvme_io": false, 00:18:02.828 "nvme_io_md": false, 00:18:02.828 "write_zeroes": true, 00:18:02.828 "zcopy": false, 00:18:02.828 "get_zone_info": false, 00:18:02.828 "zone_management": false, 00:18:02.828 "zone_append": false, 00:18:02.828 "compare": false, 00:18:02.828 "compare_and_write": false, 00:18:02.828 "abort": false, 00:18:02.828 "seek_hole": false, 00:18:02.828 "seek_data": false, 00:18:02.828 "copy": false, 00:18:02.828 "nvme_iov_md": false 00:18:02.828 }, 
00:18:02.828 "memory_domains": [ 00:18:02.828 { 00:18:02.828 "dma_device_id": "system", 00:18:02.828 "dma_device_type": 1 00:18:02.828 }, 00:18:02.828 { 00:18:02.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.828 "dma_device_type": 2 00:18:02.828 }, 00:18:02.828 { 00:18:02.828 "dma_device_id": "system", 00:18:02.828 "dma_device_type": 1 00:18:02.828 }, 00:18:02.828 { 00:18:02.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.828 "dma_device_type": 2 00:18:02.828 } 00:18:02.828 ], 00:18:02.828 "driver_specific": { 00:18:02.828 "raid": { 00:18:02.828 "uuid": "f3082749-811b-4a3d-b9fd-45e31c2ad4ae", 00:18:02.828 "strip_size_kb": 0, 00:18:02.828 "state": "online", 00:18:02.828 "raid_level": "raid1", 00:18:02.828 "superblock": true, 00:18:02.828 "num_base_bdevs": 2, 00:18:02.828 "num_base_bdevs_discovered": 2, 00:18:02.828 "num_base_bdevs_operational": 2, 00:18:02.828 "base_bdevs_list": [ 00:18:02.828 { 00:18:02.828 "name": "pt1", 00:18:02.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:02.828 "is_configured": true, 00:18:02.828 "data_offset": 256, 00:18:02.828 "data_size": 7936 00:18:02.828 }, 00:18:02.828 { 00:18:02.828 "name": "pt2", 00:18:02.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.828 "is_configured": true, 00:18:02.828 "data_offset": 256, 00:18:02.828 "data_size": 7936 00:18:02.828 } 00:18:02.828 ] 00:18:02.828 } 00:18:02.828 } 00:18:02.828 }' 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:02.828 pt2' 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.828 09:55:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:02.828 [2024-12-06 09:55:28.047258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f3082749-811b-4a3d-b9fd-45e31c2ad4ae '!=' f3082749-811b-4a3d-b9fd-45e31c2ad4ae ']' 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.828 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.088 [2024-12-06 09:55:28.098963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:18:03.088 "name": "raid_bdev1", 00:18:03.088 "uuid": "f3082749-811b-4a3d-b9fd-45e31c2ad4ae", 00:18:03.088 "strip_size_kb": 0, 00:18:03.088 "state": "online", 00:18:03.088 "raid_level": "raid1", 00:18:03.088 "superblock": true, 00:18:03.088 "num_base_bdevs": 2, 00:18:03.088 "num_base_bdevs_discovered": 1, 00:18:03.088 "num_base_bdevs_operational": 1, 00:18:03.088 "base_bdevs_list": [ 00:18:03.088 { 00:18:03.088 "name": null, 00:18:03.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.088 "is_configured": false, 00:18:03.088 "data_offset": 0, 00:18:03.088 "data_size": 7936 00:18:03.088 }, 00:18:03.088 { 00:18:03.088 "name": "pt2", 00:18:03.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.088 "is_configured": true, 00:18:03.088 "data_offset": 256, 00:18:03.088 "data_size": 7936 00:18:03.088 } 00:18:03.088 ] 00:18:03.088 }' 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.088 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.347 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.347 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.347 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.348 [2024-12-06 09:55:28.550159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.348 [2024-12-06 09:55:28.550182] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.348 [2024-12-06 09:55:28.550243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.348 [2024-12-06 09:55:28.550289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.348 [2024-12-06 
09:55:28.550300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.348 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.607 [2024-12-06 09:55:28.626029] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:03.607 [2024-12-06 09:55:28.626120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.607 [2024-12-06 09:55:28.626139] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:03.607 [2024-12-06 09:55:28.626170] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.607 [2024-12-06 09:55:28.628385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.607 [2024-12-06 09:55:28.628423] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:03.607 [2024-12-06 09:55:28.628473] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:03.607 [2024-12-06 09:55:28.628519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.607 [2024-12-06 09:55:28.628582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:03.607 [2024-12-06 09:55:28.628595] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:18:03.607 [2024-12-06 09:55:28.628686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:03.607 [2024-12-06 09:55:28.628752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:03.607 [2024-12-06 09:55:28.628760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:03.607 [2024-12-06 09:55:28.628817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.607 pt2 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.607 "name": "raid_bdev1", 00:18:03.607 "uuid": "f3082749-811b-4a3d-b9fd-45e31c2ad4ae", 00:18:03.607 "strip_size_kb": 0, 00:18:03.607 "state": "online", 00:18:03.607 "raid_level": "raid1", 00:18:03.607 "superblock": true, 00:18:03.607 "num_base_bdevs": 2, 00:18:03.607 "num_base_bdevs_discovered": 1, 00:18:03.607 "num_base_bdevs_operational": 1, 00:18:03.607 "base_bdevs_list": [ 00:18:03.607 { 00:18:03.607 "name": null, 00:18:03.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.607 "is_configured": false, 00:18:03.607 "data_offset": 256, 00:18:03.607 "data_size": 7936 00:18:03.607 }, 00:18:03.607 { 00:18:03.607 "name": "pt2", 00:18:03.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.607 "is_configured": true, 00:18:03.607 "data_offset": 256, 00:18:03.607 "data_size": 7936 00:18:03.607 } 00:18:03.607 ] 00:18:03.607 }' 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.607 09:55:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.867 [2024-12-06 09:55:29.077254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.867 [2024-12-06 09:55:29.077359] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.867 [2024-12-06 09:55:29.077448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.867 [2024-12-06 09:55:29.077519] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.867 [2024-12-06 09:55:29.077561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.867 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.867 [2024-12-06 09:55:29.137164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:03.867 [2024-12-06 09:55:29.137217] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.867 [2024-12-06 09:55:29.137236] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:03.867 [2024-12-06 09:55:29.137246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.127 [2024-12-06 09:55:29.139465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.127 [2024-12-06 09:55:29.139496] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:04.127 [2024-12-06 09:55:29.139554] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:04.127 [2024-12-06 09:55:29.139614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:04.127 [2024-12-06 09:55:29.139709] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:04.127 [2024-12-06 09:55:29.139719] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.127 [2024-12-06 09:55:29.139737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:04.127 [2024-12-06 09:55:29.139800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.127 [2024-12-06 09:55:29.139870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:04.127 [2024-12-06 09:55:29.139879] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:04.127 [2024-12-06 09:55:29.139961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:04.127 [2024-12-06 09:55:29.140033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:04.127 [2024-12-06 09:55:29.140044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:04.127 [2024-12-06 09:55:29.140126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.127 pt1 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.127 "name": "raid_bdev1", 00:18:04.127 "uuid": "f3082749-811b-4a3d-b9fd-45e31c2ad4ae", 00:18:04.127 "strip_size_kb": 0, 00:18:04.127 "state": "online", 00:18:04.127 "raid_level": "raid1", 00:18:04.127 "superblock": true, 00:18:04.127 "num_base_bdevs": 2, 00:18:04.127 "num_base_bdevs_discovered": 1, 00:18:04.127 "num_base_bdevs_operational": 1, 00:18:04.127 "base_bdevs_list": [ 00:18:04.127 { 00:18:04.127 "name": null, 00:18:04.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.127 "is_configured": false, 00:18:04.127 "data_offset": 256, 00:18:04.127 "data_size": 7936 00:18:04.127 }, 00:18:04.127 { 00:18:04.127 "name": "pt2", 00:18:04.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.127 "is_configured": true, 00:18:04.127 "data_offset": 256, 00:18:04.127 "data_size": 7936 00:18:04.127 } 00:18:04.127 ] 00:18:04.127 }' 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.127 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.387 09:55:29 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:04.387 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:04.387 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.387 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.387 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.648 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:04.648 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:04.648 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:04.648 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.648 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.648 [2024-12-06 09:55:29.672429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:04.648 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.648 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f3082749-811b-4a3d-b9fd-45e31c2ad4ae '!=' f3082749-811b-4a3d-b9fd-45e31c2ad4ae ']' 00:18:04.648 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88585 00:18:04.648 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88585 ']' 00:18:04.649 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88585 00:18:04.649 09:55:29 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:04.649 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.649 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88585 00:18:04.649 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.649 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.649 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88585' 00:18:04.649 killing process with pid 88585 00:18:04.649 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88585 00:18:04.649 [2024-12-06 09:55:29.741981] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:04.649 [2024-12-06 09:55:29.742093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.649 [2024-12-06 09:55:29.742165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.649 [2024-12-06 09:55:29.742213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:04.649 09:55:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88585 00:18:04.909 [2024-12-06 09:55:29.958527] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:06.293 09:55:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:06.293 00:18:06.293 real 0m6.174s 00:18:06.293 user 0m9.182s 00:18:06.293 sys 0m1.195s 00:18:06.293 09:55:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:18:06.293 09:55:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.293 ************************************ 00:18:06.293 END TEST raid_superblock_test_md_interleaved 00:18:06.293 ************************************ 00:18:06.293 09:55:31 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:06.293 09:55:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:06.293 09:55:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.293 09:55:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:06.293 ************************************ 00:18:06.293 START TEST raid_rebuild_test_sb_md_interleaved 00:18:06.293 ************************************ 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88914 00:18:06.293 09:55:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88914 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88914 ']' 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.293 09:55:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.293 [2024-12-06 09:55:31.333722] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:18:06.293 [2024-12-06 09:55:31.333900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88914 ] 00:18:06.293 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:06.293 Zero copy mechanism will not be used. 
00:18:06.293 [2024-12-06 09:55:31.511315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.552 [2024-12-06 09:55:31.641403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.812 [2024-12-06 09:55:31.862389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.812 [2024-12-06 09:55:31.862536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.073 BaseBdev1_malloc 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.073 [2024-12-06 09:55:32.183570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:07.073 [2024-12-06 09:55:32.183647] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.073 
[2024-12-06 09:55:32.183670] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:07.073 [2024-12-06 09:55:32.183682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.073 [2024-12-06 09:55:32.185652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.073 [2024-12-06 09:55:32.185692] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:07.073 BaseBdev1 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.073 BaseBdev2_malloc 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.073 [2024-12-06 09:55:32.239251] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:07.073 [2024-12-06 09:55:32.239306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.073 [2024-12-06 09:55:32.239325] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:07.073 [2024-12-06 09:55:32.239338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.073 [2024-12-06 09:55:32.241334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.073 [2024-12-06 09:55:32.241367] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:07.073 BaseBdev2 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.073 spare_malloc 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.073 spare_delay 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.073 09:55:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.073 [2024-12-06 09:55:32.323737] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:07.073 [2024-12-06 09:55:32.323872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.073 [2024-12-06 09:55:32.323896] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:07.073 [2024-12-06 09:55:32.323907] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.073 [2024-12-06 09:55:32.326000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.073 [2024-12-06 09:55:32.326036] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:07.073 spare 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.073 [2024-12-06 09:55:32.335770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.073 [2024-12-06 09:55:32.337835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:07.073 [2024-12-06 09:55:32.338018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:07.073 [2024-12-06 09:55:32.338032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:07.073 [2024-12-06 09:55:32.338105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:18:07.073 [2024-12-06 09:55:32.338188] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:07.073 [2024-12-06 09:55:32.338197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:07.073 [2024-12-06 09:55:32.338278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:07.073 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.074 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.074 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.074 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.074 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:07.339 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.339 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.339 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.339 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.339 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.339 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.339 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.339 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.339 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.339 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.339 "name": "raid_bdev1", 00:18:07.339 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:07.339 "strip_size_kb": 0, 00:18:07.339 "state": "online", 00:18:07.339 "raid_level": "raid1", 00:18:07.339 "superblock": true, 00:18:07.339 "num_base_bdevs": 2, 00:18:07.339 "num_base_bdevs_discovered": 2, 00:18:07.339 "num_base_bdevs_operational": 2, 00:18:07.339 "base_bdevs_list": [ 00:18:07.339 { 00:18:07.339 "name": "BaseBdev1", 00:18:07.339 "uuid": "97880f4d-6756-54e5-a09b-cd340f4c84d4", 00:18:07.339 "is_configured": true, 00:18:07.339 "data_offset": 256, 00:18:07.339 "data_size": 7936 00:18:07.339 }, 00:18:07.339 { 00:18:07.339 "name": "BaseBdev2", 00:18:07.339 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:07.339 "is_configured": true, 00:18:07.339 "data_offset": 256, 00:18:07.339 "data_size": 7936 00:18:07.339 } 00:18:07.339 ] 00:18:07.339 }' 00:18:07.339 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.339 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.600 
09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.600 [2024-12-06 09:55:32.763315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.600 [2024-12-06 09:55:32.862825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.600 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.861 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.861 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.861 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.861 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.861 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.861 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.861 "name": "raid_bdev1", 00:18:07.861 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:07.861 "strip_size_kb": 0, 00:18:07.861 "state": "online", 00:18:07.861 "raid_level": "raid1", 00:18:07.861 "superblock": true, 00:18:07.861 "num_base_bdevs": 2, 00:18:07.861 "num_base_bdevs_discovered": 1, 00:18:07.861 "num_base_bdevs_operational": 1, 00:18:07.861 "base_bdevs_list": [ 00:18:07.861 { 00:18:07.861 "name": null, 00:18:07.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.861 "is_configured": false, 00:18:07.861 "data_offset": 0, 00:18:07.861 "data_size": 7936 00:18:07.861 }, 00:18:07.861 { 00:18:07.861 "name": "BaseBdev2", 00:18:07.861 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:07.861 "is_configured": true, 00:18:07.861 "data_offset": 256, 00:18:07.861 "data_size": 7936 00:18:07.861 } 00:18:07.861 ] 00:18:07.861 }' 00:18:07.861 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.861 09:55:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.121 09:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:08.121 09:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.121 09:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.121 [2024-12-06 09:55:33.262144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.121 [2024-12-06 09:55:33.279550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:08.121 09:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.121 09:55:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:08.121 
[2024-12-06 09:55:33.281656] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.069 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.069 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.069 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.069 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.069 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.069 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.069 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.069 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.069 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.069 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.349 "name": "raid_bdev1", 00:18:09.349 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:09.349 "strip_size_kb": 0, 00:18:09.349 "state": "online", 00:18:09.349 "raid_level": "raid1", 00:18:09.349 "superblock": true, 00:18:09.349 "num_base_bdevs": 2, 00:18:09.349 "num_base_bdevs_discovered": 2, 00:18:09.349 "num_base_bdevs_operational": 2, 00:18:09.349 "process": { 00:18:09.349 "type": "rebuild", 00:18:09.349 "target": "spare", 00:18:09.349 "progress": { 00:18:09.349 
"blocks": 2560, 00:18:09.349 "percent": 32 00:18:09.349 } 00:18:09.349 }, 00:18:09.349 "base_bdevs_list": [ 00:18:09.349 { 00:18:09.349 "name": "spare", 00:18:09.349 "uuid": "b04239a2-1063-5fca-b6cb-73a61f5911ab", 00:18:09.349 "is_configured": true, 00:18:09.349 "data_offset": 256, 00:18:09.349 "data_size": 7936 00:18:09.349 }, 00:18:09.349 { 00:18:09.349 "name": "BaseBdev2", 00:18:09.349 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:09.349 "is_configured": true, 00:18:09.349 "data_offset": 256, 00:18:09.349 "data_size": 7936 00:18:09.349 } 00:18:09.349 ] 00:18:09.349 }' 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.349 [2024-12-06 09:55:34.441672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.349 [2024-12-06 09:55:34.490392] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:09.349 [2024-12-06 09:55:34.490453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.349 [2024-12-06 09:55:34.490469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.349 [2024-12-06 09:55:34.490483] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.349 "name": "raid_bdev1", 00:18:09.349 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:09.349 "strip_size_kb": 0, 00:18:09.349 "state": "online", 00:18:09.349 "raid_level": "raid1", 00:18:09.349 "superblock": true, 00:18:09.349 "num_base_bdevs": 2, 00:18:09.349 "num_base_bdevs_discovered": 1, 00:18:09.349 "num_base_bdevs_operational": 1, 00:18:09.349 "base_bdevs_list": [ 00:18:09.349 { 00:18:09.349 "name": null, 00:18:09.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.349 "is_configured": false, 00:18:09.349 "data_offset": 0, 00:18:09.349 "data_size": 7936 00:18:09.349 }, 00:18:09.349 { 00:18:09.349 "name": "BaseBdev2", 00:18:09.349 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:09.349 "is_configured": true, 00:18:09.349 "data_offset": 256, 00:18:09.349 "data_size": 7936 00:18:09.349 } 00:18:09.349 ] 00:18:09.349 }' 00:18:09.349 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.350 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.919 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.919 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.919 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.919 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.919 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.919 09:55:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.919 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.919 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.919 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.919 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.919 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.919 "name": "raid_bdev1", 00:18:09.919 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:09.919 "strip_size_kb": 0, 00:18:09.919 "state": "online", 00:18:09.919 "raid_level": "raid1", 00:18:09.919 "superblock": true, 00:18:09.919 "num_base_bdevs": 2, 00:18:09.919 "num_base_bdevs_discovered": 1, 00:18:09.919 "num_base_bdevs_operational": 1, 00:18:09.919 "base_bdevs_list": [ 00:18:09.919 { 00:18:09.919 "name": null, 00:18:09.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.919 "is_configured": false, 00:18:09.919 "data_offset": 0, 00:18:09.919 "data_size": 7936 00:18:09.919 }, 00:18:09.919 { 00:18:09.919 "name": "BaseBdev2", 00:18:09.919 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:09.919 "is_configured": true, 00:18:09.919 "data_offset": 256, 00:18:09.919 "data_size": 7936 00:18:09.919 } 00:18:09.919 ] 00:18:09.919 }' 00:18:09.919 09:55:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.919 09:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.919 09:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.919 09:55:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.919 09:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:09.919 09:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.919 09:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.919 [2024-12-06 09:55:35.087038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.919 [2024-12-06 09:55:35.103856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:09.919 09:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.919 09:55:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:09.920 [2024-12-06 09:55:35.105956] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:10.857 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.857 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.857 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.858 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.858 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.858 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.858 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:10.858 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.858 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.118 "name": "raid_bdev1", 00:18:11.118 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:11.118 "strip_size_kb": 0, 00:18:11.118 "state": "online", 00:18:11.118 "raid_level": "raid1", 00:18:11.118 "superblock": true, 00:18:11.118 "num_base_bdevs": 2, 00:18:11.118 "num_base_bdevs_discovered": 2, 00:18:11.118 "num_base_bdevs_operational": 2, 00:18:11.118 "process": { 00:18:11.118 "type": "rebuild", 00:18:11.118 "target": "spare", 00:18:11.118 "progress": { 00:18:11.118 "blocks": 2560, 00:18:11.118 "percent": 32 00:18:11.118 } 00:18:11.118 }, 00:18:11.118 "base_bdevs_list": [ 00:18:11.118 { 00:18:11.118 "name": "spare", 00:18:11.118 "uuid": "b04239a2-1063-5fca-b6cb-73a61f5911ab", 00:18:11.118 "is_configured": true, 00:18:11.118 "data_offset": 256, 00:18:11.118 "data_size": 7936 00:18:11.118 }, 00:18:11.118 { 00:18:11.118 "name": "BaseBdev2", 00:18:11.118 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:11.118 "is_configured": true, 00:18:11.118 "data_offset": 256, 00:18:11.118 "data_size": 7936 00:18:11.118 } 00:18:11.118 ] 00:18:11.118 }' 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.118 09:55:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:11.118 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=730 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.118 09:55:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.118 "name": "raid_bdev1", 00:18:11.118 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:11.118 "strip_size_kb": 0, 00:18:11.118 "state": "online", 00:18:11.118 "raid_level": "raid1", 00:18:11.118 "superblock": true, 00:18:11.118 "num_base_bdevs": 2, 00:18:11.118 "num_base_bdevs_discovered": 2, 00:18:11.118 "num_base_bdevs_operational": 2, 00:18:11.118 "process": { 00:18:11.118 "type": "rebuild", 00:18:11.118 "target": "spare", 00:18:11.118 "progress": { 00:18:11.118 "blocks": 2816, 00:18:11.118 "percent": 35 00:18:11.118 } 00:18:11.118 }, 00:18:11.118 "base_bdevs_list": [ 00:18:11.118 { 00:18:11.118 "name": "spare", 00:18:11.118 "uuid": "b04239a2-1063-5fca-b6cb-73a61f5911ab", 00:18:11.118 "is_configured": true, 00:18:11.118 "data_offset": 256, 00:18:11.118 "data_size": 7936 00:18:11.118 }, 00:18:11.118 { 00:18:11.118 "name": "BaseBdev2", 00:18:11.118 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:11.118 "is_configured": true, 00:18:11.118 "data_offset": 256, 00:18:11.118 "data_size": 7936 00:18:11.118 } 00:18:11.118 ] 00:18:11.118 }' 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.118 09:55:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:12.497 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:12.497 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.497 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.497 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.497 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.497 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.497 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.497 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.497 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.497 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.497 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.497 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.497 "name": "raid_bdev1", 00:18:12.497 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:12.497 "strip_size_kb": 0, 00:18:12.497 "state": "online", 00:18:12.498 "raid_level": "raid1", 00:18:12.498 "superblock": true, 00:18:12.498 "num_base_bdevs": 2, 00:18:12.498 "num_base_bdevs_discovered": 2, 00:18:12.498 
"num_base_bdevs_operational": 2, 00:18:12.498 "process": { 00:18:12.498 "type": "rebuild", 00:18:12.498 "target": "spare", 00:18:12.498 "progress": { 00:18:12.498 "blocks": 5632, 00:18:12.498 "percent": 70 00:18:12.498 } 00:18:12.498 }, 00:18:12.498 "base_bdevs_list": [ 00:18:12.498 { 00:18:12.498 "name": "spare", 00:18:12.498 "uuid": "b04239a2-1063-5fca-b6cb-73a61f5911ab", 00:18:12.498 "is_configured": true, 00:18:12.498 "data_offset": 256, 00:18:12.498 "data_size": 7936 00:18:12.498 }, 00:18:12.498 { 00:18:12.498 "name": "BaseBdev2", 00:18:12.498 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:12.498 "is_configured": true, 00:18:12.498 "data_offset": 256, 00:18:12.498 "data_size": 7936 00:18:12.498 } 00:18:12.498 ] 00:18:12.498 }' 00:18:12.498 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.498 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.498 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.498 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.498 09:55:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:13.065 [2024-12-06 09:55:38.228000] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:13.065 [2024-12-06 09:55:38.228176] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:13.065 [2024-12-06 09:55:38.228295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.324 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.324 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:13.324 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.324 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.324 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.324 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.324 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.324 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.324 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.324 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.324 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.324 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.324 "name": "raid_bdev1", 00:18:13.324 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:13.324 "strip_size_kb": 0, 00:18:13.324 "state": "online", 00:18:13.324 "raid_level": "raid1", 00:18:13.324 "superblock": true, 00:18:13.324 "num_base_bdevs": 2, 00:18:13.324 "num_base_bdevs_discovered": 2, 00:18:13.324 "num_base_bdevs_operational": 2, 00:18:13.324 "base_bdevs_list": [ 00:18:13.324 { 00:18:13.324 "name": "spare", 00:18:13.324 "uuid": "b04239a2-1063-5fca-b6cb-73a61f5911ab", 00:18:13.324 "is_configured": true, 00:18:13.324 "data_offset": 256, 00:18:13.324 "data_size": 7936 00:18:13.324 }, 00:18:13.324 { 00:18:13.324 "name": "BaseBdev2", 00:18:13.324 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:13.324 
"is_configured": true, 00:18:13.324 "data_offset": 256, 00:18:13.324 "data_size": 7936 00:18:13.324 } 00:18:13.324 ] 00:18:13.324 }' 00:18:13.324 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:13.582 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.582 "name": "raid_bdev1", 00:18:13.582 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:13.582 "strip_size_kb": 0, 00:18:13.582 "state": "online", 00:18:13.582 "raid_level": "raid1", 00:18:13.582 "superblock": true, 00:18:13.582 "num_base_bdevs": 2, 00:18:13.583 "num_base_bdevs_discovered": 2, 00:18:13.583 "num_base_bdevs_operational": 2, 00:18:13.583 "base_bdevs_list": [ 00:18:13.583 { 00:18:13.583 "name": "spare", 00:18:13.583 "uuid": "b04239a2-1063-5fca-b6cb-73a61f5911ab", 00:18:13.583 "is_configured": true, 00:18:13.583 "data_offset": 256, 00:18:13.583 "data_size": 7936 00:18:13.583 }, 00:18:13.583 { 00:18:13.583 "name": "BaseBdev2", 00:18:13.583 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:13.583 "is_configured": true, 00:18:13.583 "data_offset": 256, 00:18:13.583 "data_size": 7936 00:18:13.583 } 00:18:13.583 ] 00:18:13.583 }' 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.583 "name": "raid_bdev1", 00:18:13.583 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:13.583 "strip_size_kb": 0, 00:18:13.583 "state": "online", 00:18:13.583 "raid_level": "raid1", 00:18:13.583 "superblock": true, 00:18:13.583 "num_base_bdevs": 2, 00:18:13.583 "num_base_bdevs_discovered": 2, 00:18:13.583 "num_base_bdevs_operational": 2, 00:18:13.583 "base_bdevs_list": [ 00:18:13.583 { 00:18:13.583 "name": "spare", 00:18:13.583 "uuid": "b04239a2-1063-5fca-b6cb-73a61f5911ab", 00:18:13.583 
"is_configured": true, 00:18:13.583 "data_offset": 256, 00:18:13.583 "data_size": 7936 00:18:13.583 }, 00:18:13.583 { 00:18:13.583 "name": "BaseBdev2", 00:18:13.583 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:13.583 "is_configured": true, 00:18:13.583 "data_offset": 256, 00:18:13.583 "data_size": 7936 00:18:13.583 } 00:18:13.583 ] 00:18:13.583 }' 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.583 09:55:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.150 [2024-12-06 09:55:39.215595] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.150 [2024-12-06 09:55:39.215670] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.150 [2024-12-06 09:55:39.215775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.150 [2024-12-06 09:55:39.215879] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.150 [2024-12-06 09:55:39.215924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.150 
09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.150 [2024-12-06 09:55:39.271495] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:14.150 [2024-12-06 09:55:39.271547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.150 [2024-12-06 09:55:39.271570] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:14.150 [2024-12-06 09:55:39.271579] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.150 [2024-12-06 09:55:39.273699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.150 [2024-12-06 09:55:39.273735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:14.150 [2024-12-06 09:55:39.273792] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:14.150 [2024-12-06 09:55:39.273842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.150 [2024-12-06 09:55:39.273954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.150 spare 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.150 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.151 [2024-12-06 09:55:39.373846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:14.151 [2024-12-06 09:55:39.373932] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:14.151 [2024-12-06 09:55:39.374057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:14.151 [2024-12-06 09:55:39.374175] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:14.151 [2024-12-06 09:55:39.374186] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:14.151 [2024-12-06 09:55:39.374294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.151 09:55:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.151 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.410 09:55:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.410 "name": "raid_bdev1", 00:18:14.410 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:14.410 "strip_size_kb": 0, 00:18:14.410 "state": "online", 00:18:14.410 "raid_level": "raid1", 00:18:14.410 "superblock": true, 00:18:14.410 "num_base_bdevs": 2, 00:18:14.410 "num_base_bdevs_discovered": 2, 00:18:14.410 "num_base_bdevs_operational": 2, 00:18:14.410 "base_bdevs_list": [ 00:18:14.410 { 00:18:14.410 "name": "spare", 00:18:14.410 "uuid": "b04239a2-1063-5fca-b6cb-73a61f5911ab", 00:18:14.410 "is_configured": true, 00:18:14.410 "data_offset": 256, 00:18:14.410 "data_size": 7936 00:18:14.410 }, 00:18:14.410 { 00:18:14.410 "name": "BaseBdev2", 00:18:14.410 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:14.410 "is_configured": true, 00:18:14.410 "data_offset": 256, 00:18:14.410 "data_size": 7936 00:18:14.410 } 00:18:14.410 ] 00:18:14.410 }' 00:18:14.410 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.410 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.668 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.668 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.668 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.668 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.668 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.669 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.669 09:55:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.669 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.669 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.669 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.669 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.669 "name": "raid_bdev1", 00:18:14.669 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:14.669 "strip_size_kb": 0, 00:18:14.669 "state": "online", 00:18:14.669 "raid_level": "raid1", 00:18:14.669 "superblock": true, 00:18:14.669 "num_base_bdevs": 2, 00:18:14.669 "num_base_bdevs_discovered": 2, 00:18:14.669 "num_base_bdevs_operational": 2, 00:18:14.669 "base_bdevs_list": [ 00:18:14.669 { 00:18:14.669 "name": "spare", 00:18:14.669 "uuid": "b04239a2-1063-5fca-b6cb-73a61f5911ab", 00:18:14.669 "is_configured": true, 00:18:14.669 "data_offset": 256, 00:18:14.669 "data_size": 7936 00:18:14.669 }, 00:18:14.669 { 00:18:14.669 "name": "BaseBdev2", 00:18:14.669 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:14.669 "is_configured": true, 00:18:14.669 "data_offset": 256, 00:18:14.669 "data_size": 7936 00:18:14.669 } 00:18:14.669 ] 00:18:14.669 }' 00:18:14.669 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.669 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.669 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.669 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.669 09:55:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.669 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.669 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.669 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:14.669 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.928 [2024-12-06 09:55:39.966383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.928 09:55:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.928 09:55:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.928 09:55:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.928 "name": "raid_bdev1", 00:18:14.928 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:14.928 "strip_size_kb": 0, 00:18:14.928 "state": "online", 00:18:14.928 "raid_level": "raid1", 00:18:14.928 "superblock": true, 00:18:14.928 "num_base_bdevs": 2, 00:18:14.928 "num_base_bdevs_discovered": 1, 00:18:14.928 "num_base_bdevs_operational": 1, 00:18:14.928 "base_bdevs_list": [ 00:18:14.928 { 00:18:14.928 "name": null, 00:18:14.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.928 "is_configured": false, 00:18:14.928 "data_offset": 0, 00:18:14.928 "data_size": 7936 00:18:14.928 }, 00:18:14.928 { 00:18:14.928 "name": "BaseBdev2", 00:18:14.928 
"uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:14.928 "is_configured": true, 00:18:14.928 "data_offset": 256, 00:18:14.928 "data_size": 7936 00:18:14.928 } 00:18:14.928 ] 00:18:14.928 }' 00:18:14.928 09:55:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.928 09:55:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.187 09:55:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:15.187 09:55:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.187 09:55:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.187 [2024-12-06 09:55:40.409596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.187 [2024-12-06 09:55:40.409838] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:15.187 [2024-12-06 09:55:40.409897] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:15.187 [2024-12-06 09:55:40.409956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:15.187 [2024-12-06 09:55:40.426740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:15.187 09:55:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.187 09:55:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:15.187 [2024-12-06 09:55:40.428886] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:16.565 "name": "raid_bdev1", 00:18:16.565 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:16.565 "strip_size_kb": 0, 00:18:16.565 "state": "online", 00:18:16.565 "raid_level": "raid1", 00:18:16.565 "superblock": true, 00:18:16.565 "num_base_bdevs": 2, 00:18:16.565 "num_base_bdevs_discovered": 2, 00:18:16.565 "num_base_bdevs_operational": 2, 00:18:16.565 "process": { 00:18:16.565 "type": "rebuild", 00:18:16.565 "target": "spare", 00:18:16.565 "progress": { 00:18:16.565 "blocks": 2560, 00:18:16.565 "percent": 32 00:18:16.565 } 00:18:16.565 }, 00:18:16.565 "base_bdevs_list": [ 00:18:16.565 { 00:18:16.565 "name": "spare", 00:18:16.565 "uuid": "b04239a2-1063-5fca-b6cb-73a61f5911ab", 00:18:16.565 "is_configured": true, 00:18:16.565 "data_offset": 256, 00:18:16.565 "data_size": 7936 00:18:16.565 }, 00:18:16.565 { 00:18:16.565 "name": "BaseBdev2", 00:18:16.565 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:16.565 "is_configured": true, 00:18:16.565 "data_offset": 256, 00:18:16.565 "data_size": 7936 00:18:16.565 } 00:18:16.565 ] 00:18:16.565 }' 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.565 [2024-12-06 09:55:41.584108] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.565 [2024-12-06 09:55:41.637580] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:16.565 [2024-12-06 09:55:41.637708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.565 [2024-12-06 09:55:41.637723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:16.565 [2024-12-06 09:55:41.637733] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.565 09:55:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.565 "name": "raid_bdev1", 00:18:16.565 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:16.565 "strip_size_kb": 0, 00:18:16.565 "state": "online", 00:18:16.565 "raid_level": "raid1", 00:18:16.565 "superblock": true, 00:18:16.565 "num_base_bdevs": 2, 00:18:16.565 "num_base_bdevs_discovered": 1, 00:18:16.565 "num_base_bdevs_operational": 1, 00:18:16.565 "base_bdevs_list": [ 00:18:16.565 { 00:18:16.565 "name": null, 00:18:16.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.565 "is_configured": false, 00:18:16.565 "data_offset": 0, 00:18:16.565 "data_size": 7936 00:18:16.565 }, 00:18:16.565 { 00:18:16.565 "name": "BaseBdev2", 00:18:16.565 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:16.565 "is_configured": true, 00:18:16.565 "data_offset": 256, 00:18:16.565 "data_size": 7936 00:18:16.565 } 00:18:16.565 ] 00:18:16.565 }' 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.565 09:55:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.825 09:55:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:16.825 09:55:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.825 09:55:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.825 [2024-12-06 09:55:42.074771] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:16.825 [2024-12-06 09:55:42.074891] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.825 [2024-12-06 09:55:42.074944] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:16.825 [2024-12-06 09:55:42.074976] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.825 [2024-12-06 09:55:42.075241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.825 [2024-12-06 09:55:42.075291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:16.825 [2024-12-06 09:55:42.075370] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:16.825 [2024-12-06 09:55:42.075406] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:16.825 [2024-12-06 09:55:42.075445] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:16.825 [2024-12-06 09:55:42.075517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.825 [2024-12-06 09:55:42.091433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:16.825 spare 00:18:16.825 09:55:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.825 09:55:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:16.825 [2024-12-06 09:55:42.093614] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:18.208 "name": "raid_bdev1", 00:18:18.208 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:18.208 "strip_size_kb": 0, 00:18:18.208 "state": "online", 00:18:18.208 "raid_level": "raid1", 00:18:18.208 "superblock": true, 00:18:18.208 "num_base_bdevs": 2, 00:18:18.208 "num_base_bdevs_discovered": 2, 00:18:18.208 "num_base_bdevs_operational": 2, 00:18:18.208 "process": { 00:18:18.208 "type": "rebuild", 00:18:18.208 "target": "spare", 00:18:18.208 "progress": { 00:18:18.208 "blocks": 2560, 00:18:18.208 "percent": 32 00:18:18.208 } 00:18:18.208 }, 00:18:18.208 "base_bdevs_list": [ 00:18:18.208 { 00:18:18.208 "name": "spare", 00:18:18.208 "uuid": "b04239a2-1063-5fca-b6cb-73a61f5911ab", 00:18:18.208 "is_configured": true, 00:18:18.208 "data_offset": 256, 00:18:18.208 "data_size": 7936 00:18:18.208 }, 00:18:18.208 { 00:18:18.208 "name": "BaseBdev2", 00:18:18.208 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:18.208 "is_configured": true, 00:18:18.208 "data_offset": 256, 00:18:18.208 "data_size": 7936 00:18:18.208 } 00:18:18.208 ] 00:18:18.208 }' 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.208 [2024-12-06 
09:55:43.232733] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.208 [2024-12-06 09:55:43.302254] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:18.208 [2024-12-06 09:55:43.302308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.208 [2024-12-06 09:55:43.302327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.208 [2024-12-06 09:55:43.302334] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.208 09:55:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.208 "name": "raid_bdev1", 00:18:18.208 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:18.208 "strip_size_kb": 0, 00:18:18.208 "state": "online", 00:18:18.208 "raid_level": "raid1", 00:18:18.208 "superblock": true, 00:18:18.208 "num_base_bdevs": 2, 00:18:18.208 "num_base_bdevs_discovered": 1, 00:18:18.208 "num_base_bdevs_operational": 1, 00:18:18.208 "base_bdevs_list": [ 00:18:18.208 { 00:18:18.208 "name": null, 00:18:18.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.208 "is_configured": false, 00:18:18.208 "data_offset": 0, 00:18:18.208 "data_size": 7936 00:18:18.208 }, 00:18:18.208 { 00:18:18.208 "name": "BaseBdev2", 00:18:18.208 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:18.208 "is_configured": true, 00:18:18.208 "data_offset": 256, 00:18:18.208 "data_size": 7936 00:18:18.208 } 00:18:18.208 ] 00:18:18.208 }' 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.208 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.779 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.779 09:55:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.779 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:18.779 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:18.779 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.779 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.779 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.779 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.779 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.780 "name": "raid_bdev1", 00:18:18.780 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:18.780 "strip_size_kb": 0, 00:18:18.780 "state": "online", 00:18:18.780 "raid_level": "raid1", 00:18:18.780 "superblock": true, 00:18:18.780 "num_base_bdevs": 2, 00:18:18.780 "num_base_bdevs_discovered": 1, 00:18:18.780 "num_base_bdevs_operational": 1, 00:18:18.780 "base_bdevs_list": [ 00:18:18.780 { 00:18:18.780 "name": null, 00:18:18.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.780 "is_configured": false, 00:18:18.780 "data_offset": 0, 00:18:18.780 "data_size": 7936 00:18:18.780 }, 00:18:18.780 { 00:18:18.780 "name": "BaseBdev2", 00:18:18.780 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:18.780 "is_configured": true, 00:18:18.780 "data_offset": 256, 
00:18:18.780 "data_size": 7936 00:18:18.780 } 00:18:18.780 ] 00:18:18.780 }' 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.780 [2024-12-06 09:55:43.886201] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:18.780 [2024-12-06 09:55:43.886262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.780 [2024-12-06 09:55:43.886286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:18.780 [2024-12-06 09:55:43.886296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.780 [2024-12-06 09:55:43.886503] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.780 [2024-12-06 09:55:43.886523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:18.780 [2024-12-06 09:55:43.886578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:18.780 [2024-12-06 09:55:43.886596] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:18.780 [2024-12-06 09:55:43.886606] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:18.780 [2024-12-06 09:55:43.886618] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:18.780 BaseBdev1 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.780 09:55:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.719 09:55:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.719 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.719 "name": "raid_bdev1", 00:18:19.719 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:19.719 "strip_size_kb": 0, 00:18:19.719 "state": "online", 00:18:19.719 "raid_level": "raid1", 00:18:19.719 "superblock": true, 00:18:19.719 "num_base_bdevs": 2, 00:18:19.719 "num_base_bdevs_discovered": 1, 00:18:19.719 "num_base_bdevs_operational": 1, 00:18:19.719 "base_bdevs_list": [ 00:18:19.719 { 00:18:19.719 "name": null, 00:18:19.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.719 "is_configured": false, 00:18:19.719 "data_offset": 0, 00:18:19.719 "data_size": 7936 00:18:19.719 }, 00:18:19.719 { 00:18:19.720 "name": "BaseBdev2", 00:18:19.720 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:19.720 "is_configured": true, 00:18:19.720 "data_offset": 256, 00:18:19.720 "data_size": 7936 00:18:19.720 } 00:18:19.720 ] 00:18:19.720 }' 00:18:19.720 09:55:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.720 09:55:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.291 "name": "raid_bdev1", 00:18:20.291 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:20.291 "strip_size_kb": 0, 00:18:20.291 "state": "online", 00:18:20.291 "raid_level": "raid1", 00:18:20.291 "superblock": true, 00:18:20.291 "num_base_bdevs": 2, 00:18:20.291 "num_base_bdevs_discovered": 1, 00:18:20.291 "num_base_bdevs_operational": 1, 00:18:20.291 "base_bdevs_list": [ 00:18:20.291 { 00:18:20.291 "name": 
null, 00:18:20.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.291 "is_configured": false, 00:18:20.291 "data_offset": 0, 00:18:20.291 "data_size": 7936 00:18:20.291 }, 00:18:20.291 { 00:18:20.291 "name": "BaseBdev2", 00:18:20.291 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:20.291 "is_configured": true, 00:18:20.291 "data_offset": 256, 00:18:20.291 "data_size": 7936 00:18:20.291 } 00:18:20.291 ] 00:18:20.291 }' 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.291 [2024-12-06 09:55:45.459672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.291 [2024-12-06 09:55:45.459882] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:20.291 [2024-12-06 09:55:45.459902] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:20.291 request: 00:18:20.291 { 00:18:20.291 "base_bdev": "BaseBdev1", 00:18:20.291 "raid_bdev": "raid_bdev1", 00:18:20.291 "method": "bdev_raid_add_base_bdev", 00:18:20.291 "req_id": 1 00:18:20.291 } 00:18:20.291 Got JSON-RPC error response 00:18:20.291 response: 00:18:20.291 { 00:18:20.291 "code": -22, 00:18:20.291 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:20.291 } 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.291 09:55:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:21.228 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.229 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.489 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.489 "name": "raid_bdev1", 00:18:21.489 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:21.489 "strip_size_kb": 0, 
00:18:21.489 "state": "online", 00:18:21.489 "raid_level": "raid1", 00:18:21.489 "superblock": true, 00:18:21.489 "num_base_bdevs": 2, 00:18:21.489 "num_base_bdevs_discovered": 1, 00:18:21.489 "num_base_bdevs_operational": 1, 00:18:21.489 "base_bdevs_list": [ 00:18:21.489 { 00:18:21.489 "name": null, 00:18:21.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.489 "is_configured": false, 00:18:21.489 "data_offset": 0, 00:18:21.489 "data_size": 7936 00:18:21.489 }, 00:18:21.489 { 00:18:21.489 "name": "BaseBdev2", 00:18:21.489 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:21.489 "is_configured": true, 00:18:21.489 "data_offset": 256, 00:18:21.489 "data_size": 7936 00:18:21.489 } 00:18:21.489 ] 00:18:21.489 }' 00:18:21.489 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.489 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.748 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.748 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.748 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.748 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.748 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.748 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.748 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.748 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.748 
09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.748 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.748 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.748 "name": "raid_bdev1", 00:18:21.748 "uuid": "244349b2-4b87-466b-899e-ddecfb8da776", 00:18:21.748 "strip_size_kb": 0, 00:18:21.748 "state": "online", 00:18:21.748 "raid_level": "raid1", 00:18:21.748 "superblock": true, 00:18:21.748 "num_base_bdevs": 2, 00:18:21.748 "num_base_bdevs_discovered": 1, 00:18:21.748 "num_base_bdevs_operational": 1, 00:18:21.748 "base_bdevs_list": [ 00:18:21.748 { 00:18:21.748 "name": null, 00:18:21.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.748 "is_configured": false, 00:18:21.748 "data_offset": 0, 00:18:21.748 "data_size": 7936 00:18:21.748 }, 00:18:21.748 { 00:18:21.748 "name": "BaseBdev2", 00:18:21.748 "uuid": "7324f268-cd30-52fc-836d-0f64d2db3b06", 00:18:21.748 "is_configured": true, 00:18:21.748 "data_offset": 256, 00:18:21.748 "data_size": 7936 00:18:21.748 } 00:18:21.748 ] 00:18:21.748 }' 00:18:21.748 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.748 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:21.748 09:55:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.748 09:55:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:21.748 09:55:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88914 00:18:21.748 09:55:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88914 ']' 00:18:21.748 09:55:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88914 00:18:21.748 09:55:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:22.008 09:55:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.008 09:55:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88914 00:18:22.009 09:55:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:22.009 09:55:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:22.009 killing process with pid 88914 00:18:22.009 09:55:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88914' 00:18:22.009 09:55:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88914 00:18:22.009 Received shutdown signal, test time was about 60.000000 seconds 00:18:22.009 00:18:22.009 Latency(us) 00:18:22.009 [2024-12-06T09:55:47.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.009 [2024-12-06T09:55:47.282Z] =================================================================================================================== 00:18:22.009 [2024-12-06T09:55:47.282Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:22.009 [2024-12-06 09:55:47.053157] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:22.009 [2024-12-06 09:55:47.053304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.009 [2024-12-06 09:55:47.053378] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.009 09:55:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@978 -- # wait 88914 00:18:22.009 [2024-12-06 09:55:47.053391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:22.275 [2024-12-06 09:55:47.377464] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:23.705 09:55:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:23.705 00:18:23.705 real 0m17.346s 00:18:23.705 user 0m22.390s 00:18:23.705 sys 0m1.745s 00:18:23.705 09:55:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.705 09:55:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.705 ************************************ 00:18:23.705 END TEST raid_rebuild_test_sb_md_interleaved 00:18:23.705 ************************************ 00:18:23.705 09:55:48 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:23.705 09:55:48 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:23.705 09:55:48 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88914 ']' 00:18:23.705 09:55:48 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88914 00:18:23.705 09:55:48 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:23.705 00:18:23.705 real 11m51.905s 00:18:23.705 user 16m3.948s 00:18:23.705 sys 1m49.082s 00:18:23.705 09:55:48 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.705 09:55:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:23.705 ************************************ 00:18:23.705 END TEST bdev_raid 00:18:23.705 ************************************ 00:18:23.705 09:55:48 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:23.705 09:55:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:23.705 09:55:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.705 09:55:48 -- common/autotest_common.sh@10 -- # set +x 
00:18:23.705 ************************************ 00:18:23.705 START TEST spdkcli_raid 00:18:23.705 ************************************ 00:18:23.705 09:55:48 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:23.705 * Looking for test storage... 00:18:23.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:23.705 09:55:48 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:23.705 09:55:48 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:23.705 09:55:48 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:23.705 09:55:48 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:23.705 09:55:48 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:23.705 09:55:48 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:23.705 09:55:48 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:23.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.705 --rc genhtml_branch_coverage=1 00:18:23.705 --rc genhtml_function_coverage=1 00:18:23.705 --rc genhtml_legend=1 00:18:23.705 --rc geninfo_all_blocks=1 00:18:23.705 --rc geninfo_unexecuted_blocks=1 00:18:23.705 00:18:23.705 ' 00:18:23.705 09:55:48 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:23.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.705 --rc genhtml_branch_coverage=1 00:18:23.705 --rc genhtml_function_coverage=1 00:18:23.705 --rc genhtml_legend=1 00:18:23.705 --rc geninfo_all_blocks=1 00:18:23.705 --rc geninfo_unexecuted_blocks=1 00:18:23.705 00:18:23.705 ' 00:18:23.705 
09:55:48 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:23.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.705 --rc genhtml_branch_coverage=1 00:18:23.705 --rc genhtml_function_coverage=1 00:18:23.705 --rc genhtml_legend=1 00:18:23.705 --rc geninfo_all_blocks=1 00:18:23.705 --rc geninfo_unexecuted_blocks=1 00:18:23.705 00:18:23.705 ' 00:18:23.705 09:55:48 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:23.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.705 --rc genhtml_branch_coverage=1 00:18:23.705 --rc genhtml_function_coverage=1 00:18:23.705 --rc genhtml_legend=1 00:18:23.705 --rc geninfo_all_blocks=1 00:18:23.705 --rc geninfo_unexecuted_blocks=1 00:18:23.705 00:18:23.705 ' 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:23.705 09:55:48 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:23.705 09:55:48 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:23.705 09:55:48 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:23.705 09:55:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:23.706 09:55:48 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:23.706 09:55:48 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89586 00:18:23.706 09:55:48 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:23.706 09:55:48 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89586 00:18:23.706 09:55:48 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89586 ']' 00:18:23.706 09:55:48 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.706 09:55:48 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.706 09:55:48 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.706 09:55:48 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.706 09:55:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:23.966 [2024-12-06 09:55:49.080506] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:18:23.966 [2024-12-06 09:55:49.080626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89586 ] 00:18:24.226 [2024-12-06 09:55:49.257324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:24.226 [2024-12-06 09:55:49.397013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.226 [2024-12-06 09:55:49.397049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.166 09:55:50 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.166 09:55:50 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:25.166 09:55:50 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:25.166 09:55:50 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.166 09:55:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.166 09:55:50 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:25.166 09:55:50 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.166 09:55:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.166 09:55:50 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:25.166 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:25.166 ' 00:18:27.076 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:27.076 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:27.076 09:55:52 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:27.076 09:55:52 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.076 09:55:52 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:27.076 09:55:52 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:27.076 09:55:52 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.076 09:55:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.076 09:55:52 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:27.076 ' 00:18:28.014 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:28.014 09:55:53 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:28.014 09:55:53 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.014 09:55:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.274 09:55:53 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:28.274 09:55:53 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.274 09:55:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.274 09:55:53 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:28.274 09:55:53 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:28.843 09:55:53 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:28.843 09:55:53 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:28.843 09:55:53 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:28.843 09:55:53 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.843 09:55:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.843 09:55:53 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:28.843 09:55:53 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.843 09:55:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:28.843 09:55:53 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:28.843 ' 00:18:29.785 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:29.785 09:55:54 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:29.785 09:55:54 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.785 09:55:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.785 09:55:55 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:29.785 09:55:55 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.785 09:55:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.785 09:55:55 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:29.785 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:29.785 ' 00:18:31.163 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:31.163 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:31.443 09:55:56 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:31.443 09:55:56 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.443 09:55:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.443 09:55:56 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89586 00:18:31.443 09:55:56 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89586 ']' 00:18:31.443 09:55:56 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89586 00:18:31.443 09:55:56 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:31.443 09:55:56 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.443 09:55:56 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89586 00:18:31.443 09:55:56 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.443 09:55:56 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.443 killing process with pid 89586 00:18:31.443 09:55:56 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89586' 00:18:31.443 09:55:56 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89586 00:18:31.444 09:55:56 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89586 00:18:33.976 09:55:59 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:33.976 09:55:59 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89586 ']' 00:18:33.976 09:55:59 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89586 00:18:33.976 09:55:59 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89586 ']' 00:18:33.976 09:55:59 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89586 00:18:33.976 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89586) - No such process 00:18:33.976 Process with pid 89586 is not found 00:18:33.976 09:55:59 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89586 is not found' 00:18:33.976 09:55:59 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:33.976 09:55:59 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:33.976 09:55:59 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:33.976 09:55:59 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:33.976 00:18:33.976 real 0m10.430s 00:18:33.976 user 0m21.228s 00:18:33.976 sys 
0m1.356s 00:18:33.976 09:55:59 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.976 09:55:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:33.976 ************************************ 00:18:33.976 END TEST spdkcli_raid 00:18:33.976 ************************************ 00:18:33.976 09:55:59 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:33.976 09:55:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:33.976 09:55:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.976 09:55:59 -- common/autotest_common.sh@10 -- # set +x 00:18:33.976 ************************************ 00:18:33.976 START TEST blockdev_raid5f 00:18:33.976 ************************************ 00:18:33.976 09:55:59 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:34.235 * Looking for test storage... 00:18:34.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:34.235 09:55:59 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.235 09:55:59 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.235 09:55:59 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.235 09:55:59 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:34.235 09:55:59 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:34.236 09:55:59 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.236 09:55:59 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:34.236 09:55:59 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.236 09:55:59 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:34.236 09:55:59 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:34.236 09:55:59 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.236 09:55:59 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:34.236 09:55:59 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.236 09:55:59 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.236 09:55:59 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.236 09:55:59 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:34.236 09:55:59 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.236 09:55:59 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.236 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.236 --rc genhtml_branch_coverage=1 00:18:34.236 --rc genhtml_function_coverage=1 00:18:34.236 --rc genhtml_legend=1 00:18:34.236 --rc geninfo_all_blocks=1 00:18:34.236 --rc geninfo_unexecuted_blocks=1 00:18:34.236 00:18:34.236 ' 00:18:34.236 09:55:59 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.236 --rc genhtml_branch_coverage=1 00:18:34.236 --rc genhtml_function_coverage=1 00:18:34.236 --rc genhtml_legend=1 00:18:34.236 --rc geninfo_all_blocks=1 00:18:34.236 --rc geninfo_unexecuted_blocks=1 00:18:34.236 00:18:34.236 ' 00:18:34.236 09:55:59 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.236 --rc genhtml_branch_coverage=1 00:18:34.236 --rc genhtml_function_coverage=1 00:18:34.236 --rc genhtml_legend=1 00:18:34.236 --rc geninfo_all_blocks=1 00:18:34.236 --rc geninfo_unexecuted_blocks=1 00:18:34.236 00:18:34.236 ' 00:18:34.236 09:55:59 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.236 --rc genhtml_branch_coverage=1 00:18:34.236 --rc genhtml_function_coverage=1 00:18:34.236 --rc genhtml_legend=1 00:18:34.236 --rc geninfo_all_blocks=1 00:18:34.236 --rc geninfo_unexecuted_blocks=1 00:18:34.236 00:18:34.236 ' 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89872 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:34.236 09:55:59 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89872 00:18:34.236 09:55:59 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89872 ']' 00:18:34.236 09:55:59 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.236 09:55:59 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.236 09:55:59 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.236 09:55:59 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.236 09:55:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:34.495 [2024-12-06 09:55:59.570115] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:18:34.495 [2024-12-06 09:55:59.570250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89872 ] 00:18:34.495 [2024-12-06 09:55:59.755986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.754 [2024-12-06 09:55:59.890766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.692 09:56:00 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.692 09:56:00 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:35.692 09:56:00 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:35.692 09:56:00 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:18:35.692 09:56:00 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:35.692 09:56:00 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.692 09:56:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.692 Malloc0 00:18:35.951 Malloc1 00:18:35.951 Malloc2 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.951 09:56:01 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "0d61da1e-c1f7-49bb-aeb6-5d681bdfb021"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0d61da1e-c1f7-49bb-aeb6-5d681bdfb021",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "0d61da1e-c1f7-49bb-aeb6-5d681bdfb021",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "0f7e9a9d-9c5a-4618-8651-f9f2d2575455",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "f03f0820-86f7-4236-9ec5-bfcf0c296b8f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "6120b93e-4b97-4817-9a06-a6fb2f3738a2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:35.951 09:56:01 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 89872 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89872 ']' 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89872 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:35.951 09:56:01 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.951 
09:56:01 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89872 00:18:36.211 09:56:01 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.211 09:56:01 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.211 killing process with pid 89872 00:18:36.211 09:56:01 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89872' 00:18:36.211 09:56:01 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89872 00:18:36.211 09:56:01 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89872 00:18:39.534 09:56:04 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:39.534 09:56:04 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:39.534 09:56:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:39.534 09:56:04 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.534 09:56:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:39.534 ************************************ 00:18:39.534 START TEST bdev_hello_world 00:18:39.534 ************************************ 00:18:39.535 09:56:04 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:39.535 [2024-12-06 09:56:04.185850] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:18:39.535 [2024-12-06 09:56:04.185970] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89939 ] 00:18:39.535 [2024-12-06 09:56:04.363614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.535 [2024-12-06 09:56:04.501039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.104 [2024-12-06 09:56:05.115605] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:40.104 [2024-12-06 09:56:05.115666] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:40.104 [2024-12-06 09:56:05.115684] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:40.104 [2024-12-06 09:56:05.116189] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:40.104 [2024-12-06 09:56:05.116329] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:40.104 [2024-12-06 09:56:05.116350] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:40.104 [2024-12-06 09:56:05.116397] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:40.104 00:18:40.104 [2024-12-06 09:56:05.116418] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:41.599 00:18:41.599 real 0m2.498s 00:18:41.599 user 0m2.033s 00:18:41.599 sys 0m0.343s 00:18:41.599 09:56:06 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.599 09:56:06 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:41.599 ************************************ 00:18:41.599 END TEST bdev_hello_world 00:18:41.599 ************************************ 00:18:41.599 09:56:06 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:18:41.599 09:56:06 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:41.599 09:56:06 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:41.599 09:56:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:41.599 ************************************ 00:18:41.599 START TEST bdev_bounds 00:18:41.599 ************************************ 00:18:41.599 09:56:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:41.599 09:56:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89987 00:18:41.599 09:56:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:41.599 09:56:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:41.599 09:56:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89987' 00:18:41.599 Process bdevio pid: 89987 00:18:41.599 09:56:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89987 00:18:41.599 09:56:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89987 ']' 00:18:41.599 09:56:06 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.599 09:56:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.599 09:56:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.599 09:56:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.599 09:56:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:41.599 [2024-12-06 09:56:06.755939] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:18:41.599 [2024-12-06 09:56:06.756100] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89987 ] 00:18:41.859 [2024-12-06 09:56:06.937902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:41.859 [2024-12-06 09:56:07.075556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.859 [2024-12-06 09:56:07.075732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.859 [2024-12-06 09:56:07.075776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.428 09:56:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.428 09:56:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:42.428 09:56:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:42.687 I/O targets: 00:18:42.687 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:42.687 00:18:42.687 
00:18:42.687 CUnit - A unit testing framework for C - Version 2.1-3 00:18:42.687 http://cunit.sourceforge.net/ 00:18:42.687 00:18:42.687 00:18:42.687 Suite: bdevio tests on: raid5f 00:18:42.687 Test: blockdev write read block ...passed 00:18:42.687 Test: blockdev write zeroes read block ...passed 00:18:42.687 Test: blockdev write zeroes read no split ...passed 00:18:42.687 Test: blockdev write zeroes read split ...passed 00:18:42.946 Test: blockdev write zeroes read split partial ...passed 00:18:42.946 Test: blockdev reset ...passed 00:18:42.946 Test: blockdev write read 8 blocks ...passed 00:18:42.946 Test: blockdev write read size > 128k ...passed 00:18:42.946 Test: blockdev write read invalid size ...passed 00:18:42.946 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:42.946 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:42.946 Test: blockdev write read max offset ...passed 00:18:42.946 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:42.946 Test: blockdev writev readv 8 blocks ...passed 00:18:42.946 Test: blockdev writev readv 30 x 1block ...passed 00:18:42.946 Test: blockdev writev readv block ...passed 00:18:42.946 Test: blockdev writev readv size > 128k ...passed 00:18:42.946 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:42.946 Test: blockdev comparev and writev ...passed 00:18:42.946 Test: blockdev nvme passthru rw ...passed 00:18:42.946 Test: blockdev nvme passthru vendor specific ...passed 00:18:42.946 Test: blockdev nvme admin passthru ...passed 00:18:42.946 Test: blockdev copy ...passed 00:18:42.946 00:18:42.946 Run Summary: Type Total Ran Passed Failed Inactive 00:18:42.946 suites 1 1 n/a 0 0 00:18:42.946 tests 23 23 23 0 0 00:18:42.946 asserts 130 130 130 0 n/a 00:18:42.946 00:18:42.946 Elapsed time = 0.673 seconds 00:18:42.946 0 00:18:42.946 09:56:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89987 00:18:42.946 
09:56:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89987 ']' 00:18:42.946 09:56:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89987 00:18:42.946 09:56:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:42.946 09:56:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.946 09:56:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89987 00:18:42.946 09:56:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:42.946 09:56:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:42.946 09:56:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89987' 00:18:42.946 killing process with pid 89987 00:18:42.946 09:56:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89987 00:18:42.946 09:56:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89987 00:18:44.852 09:56:09 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:44.852 00:18:44.852 real 0m2.992s 00:18:44.852 user 0m7.326s 00:18:44.852 sys 0m0.515s 00:18:44.852 09:56:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.852 09:56:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:44.852 ************************************ 00:18:44.852 END TEST bdev_bounds 00:18:44.852 ************************************ 00:18:44.852 09:56:09 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:44.852 09:56:09 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:44.852 09:56:09 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.852 
09:56:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:44.852 ************************************ 00:18:44.852 START TEST bdev_nbd 00:18:44.852 ************************************ 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90052 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90052 /var/tmp/spdk-nbd.sock 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90052 ']' 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.852 09:56:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:44.852 [2024-12-06 09:56:09.828447] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:18:44.852 [2024-12-06 09:56:09.828579] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.852 [2024-12-06 09:56:10.004254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.112 [2024-12-06 09:56:10.141574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.681 09:56:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:45.682 09:56:10 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:45.942 1+0 records in 00:18:45.942 1+0 records out 00:18:45.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485393 s, 8.4 MB/s 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:45.942 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:46.201 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:46.201 { 00:18:46.201 "nbd_device": "/dev/nbd0", 00:18:46.201 "bdev_name": "raid5f" 00:18:46.201 } 00:18:46.201 ]' 00:18:46.201 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:46.201 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:46.201 { 00:18:46.201 "nbd_device": "/dev/nbd0", 00:18:46.201 "bdev_name": "raid5f" 00:18:46.201 } 00:18:46.201 ]' 00:18:46.201 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:46.201 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:46.201 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.202 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:46.202 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.202 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:46.202 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.202 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:46.461 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:46.721 /dev/nbd0 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:46.721 09:56:11 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:46.721 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:46.722 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:46.722 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:46.722 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:46.722 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:46.722 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:46.722 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:46.722 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:46.981 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.981 1+0 records in 00:18:46.981 1+0 records out 00:18:46.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413784 s, 9.9 MB/s 00:18:46.981 09:56:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.981 09:56:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:46.981 09:56:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.981 09:56:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:46.981 09:56:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:46.981 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.981 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.981 09:56:12 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:46.981 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:46.981 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:46.981 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:46.981 { 00:18:46.981 "nbd_device": "/dev/nbd0", 00:18:46.981 "bdev_name": "raid5f" 00:18:46.981 } 00:18:46.981 ]' 00:18:46.981 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:46.981 { 00:18:46.981 "nbd_device": "/dev/nbd0", 00:18:46.981 "bdev_name": "raid5f" 00:18:46.981 } 00:18:46.981 ]' 00:18:46.981 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:47.241 256+0 records in 00:18:47.241 256+0 records out 00:18:47.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120272 s, 87.2 MB/s 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:47.241 256+0 records in 00:18:47.241 256+0 records out 00:18:47.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314834 s, 33.3 MB/s 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:47.241 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:47.500 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:47.500 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:47.500 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:47.500 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.500 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.500 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:47.500 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:47.500 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.500 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:47.500 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:47.500 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:18:47.500 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:47.760 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:47.760 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:47.760 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:47.760 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:47.760 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:47.760 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:47.760 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:47.760 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:47.760 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:47.760 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:47.761 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:47.761 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:47.761 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:47.761 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:47.761 09:56:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:47.761 malloc_lvol_verify 00:18:48.020 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:48.020 a4172e9d-82ee-47d0-8f89-e011e3ada6a8 00:18:48.020 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:48.280 26e1b363-ace7-44c6-a552-c933a4690489 00:18:48.280 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:48.540 /dev/nbd0 00:18:48.540 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:48.540 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:48.540 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:48.540 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:48.540 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:48.540 mke2fs 1.47.0 (5-Feb-2023) 00:18:48.540 Discarding device blocks: 0/4096 done 00:18:48.540 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:48.540 00:18:48.540 Allocating group tables: 0/1 done 00:18:48.540 Writing inode tables: 0/1 done 00:18:48.540 Creating journal (1024 blocks): done 00:18:48.540 Writing superblocks and filesystem accounting information: 0/1 done 00:18:48.540 00:18:48.540 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:48.540 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:48.540 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:48.540 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:48.540 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:48.540 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:48.540 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90052 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90052 ']' 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90052 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90052 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90052' 00:18:48.800 killing process with pid 90052 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90052 00:18:48.800 09:56:13 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90052 00:18:50.182 09:56:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:50.182 00:18:50.182 real 0m5.708s 00:18:50.182 user 0m7.508s 00:18:50.182 sys 0m1.342s 00:18:50.182 09:56:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.182 09:56:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:50.182 ************************************ 00:18:50.182 END TEST bdev_nbd 00:18:50.182 ************************************ 00:18:50.441 09:56:15 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:18:50.442 09:56:15 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:18:50.442 09:56:15 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:18:50.442 09:56:15 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:18:50.442 09:56:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:50.442 09:56:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.442 09:56:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:50.442 ************************************ 00:18:50.442 START TEST bdev_fio 00:18:50.442 ************************************ 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:50.442 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:50.442 ************************************ 00:18:50.442 START TEST bdev_fio_rw_verify 00:18:50.442 ************************************ 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:50.442 09:56:15 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:50.702 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:50.702 fio-3.35 00:18:50.702 Starting 1 thread 00:19:02.964 00:19:02.964 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90254: Fri Dec 6 09:56:26 2024 00:19:02.964 read: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(458MiB/10001msec) 00:19:02.964 slat (usec): min=17, max=101, avg=20.38, stdev= 2.40 00:19:02.964 clat (usec): min=11, max=310, avg=137.79, stdev=48.34 00:19:02.965 lat (usec): min=31, max=338, avg=158.16, stdev=48.69 00:19:02.965 clat percentiles (usec): 00:19:02.965 | 50.000th=[ 139], 99.000th=[ 229], 99.900th=[ 260], 99.990th=[ 289], 00:19:02.965 | 99.999th=[ 306] 00:19:02.965 write: IOPS=12.3k, BW=48.1MiB/s (50.4MB/s)(475MiB/9870msec); 0 zone resets 00:19:02.965 slat (usec): min=7, max=223, avg=16.75, stdev= 3.77 00:19:02.965 clat (usec): min=62, max=1611, avg=315.10, stdev=44.44 00:19:02.965 lat (usec): min=84, max=1734, avg=331.86, stdev=45.52 00:19:02.965 clat percentiles (usec): 00:19:02.965 | 50.000th=[ 318], 99.000th=[ 400], 99.900th=[ 603], 99.990th=[ 1352], 00:19:02.965 | 99.999th=[ 1598] 00:19:02.965 bw ( KiB/s): min=45264, max=51120, per=98.67%, avg=48588.68, stdev=1873.26, samples=19 00:19:02.965 iops : min=11316, max=12780, avg=12147.16, stdev=468.33, samples=19 00:19:02.965 lat (usec) : 20=0.01%, 50=0.01%, 100=13.04%, 
250=39.29%, 500=47.58% 00:19:02.965 lat (usec) : 750=0.06%, 1000=0.02% 00:19:02.965 lat (msec) : 2=0.01% 00:19:02.965 cpu : usr=98.99%, sys=0.30%, ctx=22, majf=0, minf=9682 00:19:02.965 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.965 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.965 issued rwts: total=117259,121513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.965 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:02.965 00:19:02.965 Run status group 0 (all jobs): 00:19:02.965 READ: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=458MiB (480MB), run=10001-10001msec 00:19:02.965 WRITE: bw=48.1MiB/s (50.4MB/s), 48.1MiB/s-48.1MiB/s (50.4MB/s-50.4MB/s), io=475MiB (498MB), run=9870-9870msec 00:19:03.536 ----------------------------------------------------- 00:19:03.536 Suppressions used: 00:19:03.536 count bytes template 00:19:03.536 1 7 /usr/src/fio/parse.c 00:19:03.536 622 59712 /usr/src/fio/iolog.c 00:19:03.536 1 8 libtcmalloc_minimal.so 00:19:03.536 1 904 libcrypto.so 00:19:03.536 ----------------------------------------------------- 00:19:03.536 00:19:03.536 00:19:03.536 real 0m13.035s 00:19:03.536 user 0m13.104s 00:19:03.536 sys 0m0.696s 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:03.536 ************************************ 00:19:03.536 END TEST bdev_fio_rw_verify 00:19:03.536 ************************************ 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:03.536 09:56:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "0d61da1e-c1f7-49bb-aeb6-5d681bdfb021"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"0d61da1e-c1f7-49bb-aeb6-5d681bdfb021",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "0d61da1e-c1f7-49bb-aeb6-5d681bdfb021",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "0f7e9a9d-9c5a-4618-8651-f9f2d2575455",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "f03f0820-86f7-4236-9ec5-bfcf0c296b8f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "6120b93e-4b97-4817-9a06-a6fb2f3738a2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:03.796 09:56:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:03.796 09:56:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:03.796 09:56:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:03.796 /home/vagrant/spdk_repo/spdk 00:19:03.796 09:56:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:03.796 09:56:28 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:19:03.796 00:19:03.796 real 0m13.323s 00:19:03.796 user 0m13.223s 00:19:03.796 sys 0m0.840s 00:19:03.796 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.796 09:56:28 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:03.796 ************************************ 00:19:03.796 END TEST bdev_fio 00:19:03.796 ************************************ 00:19:03.796 09:56:28 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:03.796 09:56:28 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:03.796 09:56:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:03.796 09:56:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.796 09:56:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:03.796 ************************************ 00:19:03.796 START TEST bdev_verify 00:19:03.796 ************************************ 00:19:03.796 09:56:28 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:03.796 [2024-12-06 09:56:28.972362] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 
00:19:03.796 [2024-12-06 09:56:28.972494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90423 ] 00:19:04.056 [2024-12-06 09:56:29.147427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:04.056 [2024-12-06 09:56:29.289121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.056 [2024-12-06 09:56:29.289193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.627 Running I/O for 5 seconds... 00:19:06.945 10045.00 IOPS, 39.24 MiB/s [2024-12-06T09:56:33.156Z] 10107.50 IOPS, 39.48 MiB/s [2024-12-06T09:56:34.095Z] 10090.67 IOPS, 39.42 MiB/s [2024-12-06T09:56:35.036Z] 10084.25 IOPS, 39.39 MiB/s [2024-12-06T09:56:35.036Z] 10099.80 IOPS, 39.45 MiB/s 00:19:09.763 Latency(us) 00:19:09.763 [2024-12-06T09:56:35.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.763 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:09.763 Verification LBA range: start 0x0 length 0x2000 00:19:09.763 raid5f : 5.02 6038.58 23.59 0.00 0.00 31976.93 220.00 22665.73 00:19:09.763 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:09.763 Verification LBA range: start 0x2000 length 0x2000 00:19:09.763 raid5f : 5.02 4072.56 15.91 0.00 0.00 47291.86 151.14 34113.06 00:19:09.763 [2024-12-06T09:56:35.036Z] =================================================================================================================== 00:19:09.763 [2024-12-06T09:56:35.036Z] Total : 10111.13 39.50 0.00 0.00 38149.97 151.14 34113.06 00:19:11.677 00:19:11.677 real 0m7.535s 00:19:11.677 user 0m13.830s 00:19:11.677 sys 0m0.375s 00:19:11.677 09:56:36 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.677 09:56:36 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:11.677 ************************************ 00:19:11.677 END TEST bdev_verify 00:19:11.677 ************************************ 00:19:11.677 09:56:36 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:11.677 09:56:36 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:11.677 09:56:36 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.677 09:56:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:11.677 ************************************ 00:19:11.677 START TEST bdev_verify_big_io 00:19:11.677 ************************************ 00:19:11.677 09:56:36 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:11.677 [2024-12-06 09:56:36.575180] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:19:11.677 [2024-12-06 09:56:36.575287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90522 ] 00:19:11.677 [2024-12-06 09:56:36.749696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:11.677 [2024-12-06 09:56:36.892763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.677 [2024-12-06 09:56:36.892799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.284 Running I/O for 5 seconds... 
00:19:14.626 633.00 IOPS, 39.56 MiB/s [2024-12-06T09:56:40.838Z] 696.50 IOPS, 43.53 MiB/s [2024-12-06T09:56:41.778Z] 718.33 IOPS, 44.90 MiB/s [2024-12-06T09:56:42.717Z] 729.25 IOPS, 45.58 MiB/s [2024-12-06T09:56:42.717Z] 710.80 IOPS, 44.42 MiB/s 00:19:17.445 Latency(us) 00:19:17.445 [2024-12-06T09:56:42.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.445 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:17.445 Verification LBA range: start 0x0 length 0x200 00:19:17.445 raid5f : 5.12 421.31 26.33 0.00 0.00 7620047.16 162.77 327851.71 00:19:17.445 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:17.445 Verification LBA range: start 0x200 length 0x200 00:19:17.445 raid5f : 5.19 318.11 19.88 0.00 0.00 10004905.09 210.17 415767.25 00:19:17.445 [2024-12-06T09:56:42.718Z] =================================================================================================================== 00:19:17.445 [2024-12-06T09:56:42.718Z] Total : 739.42 46.21 0.00 0.00 8653485.60 162.77 415767.25 00:19:19.354 00:19:19.354 real 0m7.756s 00:19:19.354 user 0m14.283s 00:19:19.354 sys 0m0.351s 00:19:19.354 09:56:44 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.354 09:56:44 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:19.354 ************************************ 00:19:19.354 END TEST bdev_verify_big_io 00:19:19.354 ************************************ 00:19:19.354 09:56:44 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:19.354 09:56:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:19.354 09:56:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.354 09:56:44 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:19.354 ************************************ 00:19:19.354 START TEST bdev_write_zeroes 00:19:19.354 ************************************ 00:19:19.354 09:56:44 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:19.354 [2024-12-06 09:56:44.406265] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:19:19.354 [2024-12-06 09:56:44.406393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90620 ] 00:19:19.354 [2024-12-06 09:56:44.580201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.614 [2024-12-06 09:56:44.725256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.184 Running I/O for 1 seconds... 
00:19:21.123 27951.00 IOPS, 109.18 MiB/s 00:19:21.123 Latency(us) 00:19:21.123 [2024-12-06T09:56:46.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.123 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:21.123 raid5f : 1.01 27937.28 109.13 0.00 0.00 4567.75 1481.00 5924.00 00:19:21.123 [2024-12-06T09:56:46.396Z] =================================================================================================================== 00:19:21.123 [2024-12-06T09:56:46.396Z] Total : 27937.28 109.13 0.00 0.00 4567.75 1481.00 5924.00 00:19:23.031 00:19:23.031 real 0m3.537s 00:19:23.031 user 0m3.051s 00:19:23.031 sys 0m0.357s 00:19:23.031 09:56:47 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.031 09:56:47 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:23.031 ************************************ 00:19:23.031 END TEST bdev_write_zeroes 00:19:23.031 ************************************ 00:19:23.031 09:56:47 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:23.031 09:56:47 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:23.031 09:56:47 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.031 09:56:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.031 ************************************ 00:19:23.031 START TEST bdev_json_nonenclosed 00:19:23.031 ************************************ 00:19:23.031 09:56:47 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:23.031 [2024-12-06 
09:56:48.011293] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:19:23.031 [2024-12-06 09:56:48.011413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90679 ] 00:19:23.031 [2024-12-06 09:56:48.184276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.289 [2024-12-06 09:56:48.315963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.289 [2024-12-06 09:56:48.316106] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:23.289 [2024-12-06 09:56:48.316138] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:23.289 [2024-12-06 09:56:48.316160] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:23.547 00:19:23.547 real 0m0.646s 00:19:23.547 user 0m0.417s 00:19:23.547 sys 0m0.124s 00:19:23.547 09:56:48 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.547 09:56:48 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:23.547 ************************************ 00:19:23.547 END TEST bdev_json_nonenclosed 00:19:23.547 ************************************ 00:19:23.547 09:56:48 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:23.547 09:56:48 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:23.547 09:56:48 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.547 09:56:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:23.547 
************************************ 00:19:23.547 START TEST bdev_json_nonarray 00:19:23.547 ************************************ 00:19:23.547 09:56:48 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:23.547 [2024-12-06 09:56:48.721363] Starting SPDK v25.01-pre git sha1 eec618948 / DPDK 24.03.0 initialization... 00:19:23.547 [2024-12-06 09:56:48.721482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90704 ] 00:19:23.806 [2024-12-06 09:56:48.892692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.806 [2024-12-06 09:56:49.021762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.806 [2024-12-06 09:56:49.021875] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:23.806 [2024-12-06 09:56:49.021896] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:23.806 [2024-12-06 09:56:49.021915] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:24.065 00:19:24.065 real 0m0.641s 00:19:24.065 user 0m0.413s 00:19:24.065 sys 0m0.124s 00:19:24.065 09:56:49 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.065 09:56:49 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:24.065 ************************************ 00:19:24.065 END TEST bdev_json_nonarray 00:19:24.065 ************************************ 00:19:24.324 09:56:49 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:19:24.324 09:56:49 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:19:24.324 09:56:49 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:19:24.324 09:56:49 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:19:24.324 09:56:49 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:19:24.324 09:56:49 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:24.324 09:56:49 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:24.324 09:56:49 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:24.324 09:56:49 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:24.324 09:56:49 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:24.324 09:56:49 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:24.324 00:19:24.324 real 0m50.128s 00:19:24.324 user 1m6.802s 00:19:24.324 sys 0m5.681s 00:19:24.324 09:56:49 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:24.324 09:56:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:24.324 
************************************ 00:19:24.324 END TEST blockdev_raid5f 00:19:24.324 ************************************ 00:19:24.324 09:56:49 -- spdk/autotest.sh@194 -- # uname -s 00:19:24.324 09:56:49 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:24.324 09:56:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:24.324 09:56:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:24.324 09:56:49 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:24.324 09:56:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.324 09:56:49 -- common/autotest_common.sh@10 -- # set +x 00:19:24.324 09:56:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:24.324 09:56:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:24.324 09:56:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:24.324 09:56:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:24.324 09:56:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:24.324 09:56:49 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:19:24.324 09:56:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:24.324 09:56:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.324 09:56:49 -- common/autotest_common.sh@10 -- # set +x 00:19:24.324 09:56:49 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:24.324 09:56:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:24.324 09:56:49 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:24.324 09:56:49 -- common/autotest_common.sh@10 -- # set +x 00:19:26.884 INFO: APP EXITING 00:19:26.884 INFO: killing all VMs 00:19:26.884 INFO: killing vhost app 00:19:26.884 INFO: EXIT DONE 00:19:26.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:26.884 Waiting for block devices as requested 00:19:27.144 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:27.144 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:28.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:28.084 Cleaning 00:19:28.084 Removing: /var/run/dpdk/spdk0/config 00:19:28.084 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:28.084 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:28.084 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:28.084 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:28.084 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:28.084 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:28.084 Removing: /dev/shm/spdk_tgt_trace.pid56910 00:19:28.084 Removing: /var/run/dpdk/spdk0 00:19:28.084 Removing: /var/run/dpdk/spdk_pid56675 00:19:28.084 Removing: /var/run/dpdk/spdk_pid56910 00:19:28.084 Removing: /var/run/dpdk/spdk_pid57139 00:19:28.084 Removing: /var/run/dpdk/spdk_pid57254 00:19:28.084 Removing: /var/run/dpdk/spdk_pid57309 00:19:28.084 Removing: /var/run/dpdk/spdk_pid57438 00:19:28.084 Removing: /var/run/dpdk/spdk_pid57456 
00:19:28.084 Removing: /var/run/dpdk/spdk_pid57666 00:19:28.084 Removing: /var/run/dpdk/spdk_pid57782 00:19:28.084 Removing: /var/run/dpdk/spdk_pid57890 00:19:28.084 Removing: /var/run/dpdk/spdk_pid58012 00:19:28.084 Removing: /var/run/dpdk/spdk_pid58120 00:19:28.084 Removing: /var/run/dpdk/spdk_pid58160 00:19:28.084 Removing: /var/run/dpdk/spdk_pid58196 00:19:28.084 Removing: /var/run/dpdk/spdk_pid58267 00:19:28.084 Removing: /var/run/dpdk/spdk_pid58395 00:19:28.084 Removing: /var/run/dpdk/spdk_pid58841 00:19:28.084 Removing: /var/run/dpdk/spdk_pid58912 00:19:28.084 Removing: /var/run/dpdk/spdk_pid58986 00:19:28.084 Removing: /var/run/dpdk/spdk_pid59002 00:19:28.084 Removing: /var/run/dpdk/spdk_pid59152 00:19:28.084 Removing: /var/run/dpdk/spdk_pid59168 00:19:28.084 Removing: /var/run/dpdk/spdk_pid59319 00:19:28.344 Removing: /var/run/dpdk/spdk_pid59335 00:19:28.344 Removing: /var/run/dpdk/spdk_pid59405 00:19:28.344 Removing: /var/run/dpdk/spdk_pid59423 00:19:28.344 Removing: /var/run/dpdk/spdk_pid59493 00:19:28.344 Removing: /var/run/dpdk/spdk_pid59512 00:19:28.344 Removing: /var/run/dpdk/spdk_pid59713 00:19:28.344 Removing: /var/run/dpdk/spdk_pid59744 00:19:28.344 Removing: /var/run/dpdk/spdk_pid59833 00:19:28.344 Removing: /var/run/dpdk/spdk_pid61181 00:19:28.344 Removing: /var/run/dpdk/spdk_pid61391 00:19:28.344 Removing: /var/run/dpdk/spdk_pid61532 00:19:28.344 Removing: /var/run/dpdk/spdk_pid62176 00:19:28.344 Removing: /var/run/dpdk/spdk_pid62382 00:19:28.344 Removing: /var/run/dpdk/spdk_pid62523 00:19:28.344 Removing: /var/run/dpdk/spdk_pid63165 00:19:28.344 Removing: /var/run/dpdk/spdk_pid63495 00:19:28.344 Removing: /var/run/dpdk/spdk_pid63635 00:19:28.344 Removing: /var/run/dpdk/spdk_pid65026 00:19:28.344 Removing: /var/run/dpdk/spdk_pid65279 00:19:28.344 Removing: /var/run/dpdk/spdk_pid65425 00:19:28.344 Removing: /var/run/dpdk/spdk_pid66810 00:19:28.344 Removing: /var/run/dpdk/spdk_pid67063 00:19:28.344 Removing: /var/run/dpdk/spdk_pid67203 
00:19:28.344 Removing: /var/run/dpdk/spdk_pid68588 00:19:28.344 Removing: /var/run/dpdk/spdk_pid69035 00:19:28.344 Removing: /var/run/dpdk/spdk_pid69175 00:19:28.344 Removing: /var/run/dpdk/spdk_pid70658 00:19:28.344 Removing: /var/run/dpdk/spdk_pid70922 00:19:28.344 Removing: /var/run/dpdk/spdk_pid71068 00:19:28.344 Removing: /var/run/dpdk/spdk_pid72565 00:19:28.344 Removing: /var/run/dpdk/spdk_pid72824 00:19:28.344 Removing: /var/run/dpdk/spdk_pid72970 00:19:28.344 Removing: /var/run/dpdk/spdk_pid74444 00:19:28.344 Removing: /var/run/dpdk/spdk_pid74928 00:19:28.344 Removing: /var/run/dpdk/spdk_pid75078 00:19:28.344 Removing: /var/run/dpdk/spdk_pid75222 00:19:28.344 Removing: /var/run/dpdk/spdk_pid75634 00:19:28.344 Removing: /var/run/dpdk/spdk_pid76355 00:19:28.344 Removing: /var/run/dpdk/spdk_pid76750 00:19:28.344 Removing: /var/run/dpdk/spdk_pid77439 00:19:28.344 Removing: /var/run/dpdk/spdk_pid77879 00:19:28.344 Removing: /var/run/dpdk/spdk_pid78634 00:19:28.344 Removing: /var/run/dpdk/spdk_pid79044 00:19:28.344 Removing: /var/run/dpdk/spdk_pid81002 00:19:28.344 Removing: /var/run/dpdk/spdk_pid81446 00:19:28.344 Removing: /var/run/dpdk/spdk_pid81886 00:19:28.344 Removing: /var/run/dpdk/spdk_pid83971 00:19:28.344 Removing: /var/run/dpdk/spdk_pid84457 00:19:28.344 Removing: /var/run/dpdk/spdk_pid84983 00:19:28.344 Removing: /var/run/dpdk/spdk_pid86049 00:19:28.344 Removing: /var/run/dpdk/spdk_pid86379 00:19:28.344 Removing: /var/run/dpdk/spdk_pid87324 00:19:28.344 Removing: /var/run/dpdk/spdk_pid87647 00:19:28.344 Removing: /var/run/dpdk/spdk_pid88585 00:19:28.344 Removing: /var/run/dpdk/spdk_pid88914 00:19:28.344 Removing: /var/run/dpdk/spdk_pid89586 00:19:28.344 Removing: /var/run/dpdk/spdk_pid89872 00:19:28.604 Removing: /var/run/dpdk/spdk_pid89939 00:19:28.604 Removing: /var/run/dpdk/spdk_pid89987 00:19:28.604 Removing: /var/run/dpdk/spdk_pid90239 00:19:28.604 Removing: /var/run/dpdk/spdk_pid90423 00:19:28.604 Removing: /var/run/dpdk/spdk_pid90522 
00:19:28.604 Removing: /var/run/dpdk/spdk_pid90620 00:19:28.604 Removing: /var/run/dpdk/spdk_pid90679 00:19:28.604 Removing: /var/run/dpdk/spdk_pid90704 00:19:28.604 Clean 00:19:28.604 09:56:53 -- common/autotest_common.sh@1453 -- # return 0 00:19:28.604 09:56:53 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:28.604 09:56:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.604 09:56:53 -- common/autotest_common.sh@10 -- # set +x 00:19:28.604 09:56:53 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:28.604 09:56:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.604 09:56:53 -- common/autotest_common.sh@10 -- # set +x 00:19:28.604 09:56:53 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:28.604 09:56:53 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:28.604 09:56:53 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:28.604 09:56:53 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:28.604 09:56:53 -- spdk/autotest.sh@398 -- # hostname 00:19:28.604 09:56:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:28.864 geninfo: WARNING: invalid characters removed from testname! 
00:19:55.423 09:57:16 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:55.424 09:57:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:56.806 09:57:21 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:58.716 09:57:23 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:01.252 09:57:26 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:03.160 09:57:28 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:05.732 09:57:30 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:05.732 09:57:30 -- spdk/autorun.sh@1 -- $ timing_finish 00:20:05.732 09:57:30 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:20:05.732 09:57:30 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:05.732 09:57:30 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:05.732 09:57:30 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:05.732 + [[ -n 5430 ]] 00:20:05.732 + sudo kill 5430 00:20:05.742 [Pipeline] } 00:20:05.759 [Pipeline] // timeout 00:20:05.765 [Pipeline] } 00:20:05.780 [Pipeline] // stage 00:20:05.786 [Pipeline] } 00:20:05.801 [Pipeline] // catchError 00:20:05.811 [Pipeline] stage 00:20:05.813 [Pipeline] { (Stop VM) 00:20:05.827 [Pipeline] sh 00:20:06.111 + vagrant halt 00:20:08.652 ==> default: Halting domain... 00:20:16.787 [Pipeline] sh 00:20:17.071 + vagrant destroy -f 00:20:19.659 ==> default: Removing domain... 
00:20:19.672 [Pipeline] sh 00:20:19.968 + mv output /var/jenkins/workspace/raid-vg-autotest_3/output 00:20:20.005 [Pipeline] } 00:20:20.016 [Pipeline] // stage 00:20:20.019 [Pipeline] } 00:20:20.028 [Pipeline] // dir 00:20:20.031 [Pipeline] } 00:20:20.042 [Pipeline] // wrap 00:20:20.047 [Pipeline] } 00:20:20.057 [Pipeline] // catchError 00:20:20.065 [Pipeline] stage 00:20:20.067 [Pipeline] { (Epilogue) 00:20:20.078 [Pipeline] sh 00:20:20.359 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:24.573 [Pipeline] catchError 00:20:24.575 [Pipeline] { 00:20:24.588 [Pipeline] sh 00:20:24.873 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:24.873 Artifacts sizes are good 00:20:24.883 [Pipeline] } 00:20:24.898 [Pipeline] // catchError 00:20:24.913 [Pipeline] archiveArtifacts 00:20:24.920 Archiving artifacts 00:20:25.044 [Pipeline] cleanWs 00:20:25.059 [WS-CLEANUP] Deleting project workspace... 00:20:25.059 [WS-CLEANUP] Deferred wipeout is used... 00:20:25.066 [WS-CLEANUP] done 00:20:25.068 [Pipeline] } 00:20:25.083 [Pipeline] // stage 00:20:25.088 [Pipeline] } 00:20:25.102 [Pipeline] // node 00:20:25.107 [Pipeline] End of Pipeline 00:20:25.143 Finished: SUCCESS